In the twentieth century, two Americans made observations which highlighted the frightening growth of humanity’s collective learning.
In 1965, Gordon Moore, co-founder of Intel, made a prediction that came to be known as Moore’s Law: the number of transistors on a microchip would double approximately every two years, leading to a sharp decrease in the cost per transistor and an exponential increase in computing power. This has contributed to an explosion in collective learning and been a driving force behind the rapid development of digital technology and artificial intelligence.
American polymath Buckminster Fuller conceived the Knowledge Doubling Curve, which highlighted the acceleration of human knowledge over time. Fuller estimated that up until 1900, knowledge doubled roughly every century. As the twentieth century rolled on, the Knowledge Doubling Curve picked up pace; according to some estimates, human knowledge now doubles every twelve hours.
The Knowledge Doubling Curve describes the broader expansion of human understanding across all fields, an acceleration that can be attributed to various factors, including technological advancements, improved communication, and better access to information. The growth of human knowledge in one area can lead to breakthroughs in others, creating a positive feedback loop that accelerates overall progress.
Some experts believe that, due to the physical limits of microchip miniaturization, Moore’s Law is a thing of the past. If true, this could also put the brakes on the Knowledge Doubling Curve. But that doesn’t take the current explosion in generative AI into account. At the time of writing in mid-2023, humanity has become enamored of ChatGPT, DALL-E, Midjourney, and other such tools.
Prompt: Generative AI art stealing people's jobs
I’m not immune to this revolution.
When I wrote Lightbulb Moments in Human History: From Cave to Colosseum in 2021, I didn't use generative AI. All illustrations were stock photographs purchased from Adobe Stock or iStock, and edited in Adobe Photoshop. My research was mostly what now passes for ‘old school’: hard-copy books, the internet, and Google Scholar. A little generative AI art found its way into Lightbulb Moments in Human History: From Peasants to Periwigs in 2022, but my research methods remained largely the same.
Fast forward to mid-2023, and my writing process has evolved thanks to generative AI. Almost all illustrations (including those on the cover of the book) have been generated by specific prompts I crafted for Midjourney. And while some of my initial research is now done in ChatGPT, I’m loath to trust it, because in my opinion its accuracy is still questionable. More traditional methods of study still make up the bulk of my historical analysis.
I certainly drew the line at letting ChatGPT do the writing for me. I think it’s cheating, and where’s the fun in that?