Vernor Vinge? Samuel Butler? Luke Muehlhauser? Anna Salamon? Anders Sandberg? Apocryphal?

Question for Quote Investigator: Current commentators are preoccupied with guessing when artificial intelligence (AI) systems will achieve human-level intelligence, but a thoughtful science fiction author crafted the following edifying dialogue:
“Will computers ever be as smart as humans?”
“Yes, but only briefly.”
The author suggested that the advancement of AI systems would not pause; instead, systems would rapidly achieve superhuman intelligence and continue ascending toward human incomprehensibility. Would you please help me find the author's name together with a citation?
Reply from Quote Investigator: In 2008, prize-winning science fiction author Vernor Vinge published “Signs of the Singularity” in the journal “IEEE Spectrum”. Vinge discussed the implications of AI:1
The consequences of creating human-level artificial intelligence would be profound, but it would still be explainable to present-day humans like you and me.
But what happens a year or two after that? The best answer to the question, “Will computers ever be as smart as humans?” is probably “Yes, but only briefly.”
For most of us, the hard part is believing that machines could ever reach parity. If that does happen, then the development of superhuman performance seems very likely—and that is the singularity. In its simplest form, this might be achieved by “running the processor clock faster” on machines that were already at human parity. I call such creatures “weakly superhuman,” since they should be understandable if we had enough time to analyze their behavior.
Below are additional selected citations in chronological order.