After “Oppenheimer” is AI the Next Big Moral Question?

August 03, 2023

If you haven’t already seen “Oppenheimer,” you should go, for sure! The 3-hour run time flew by. Many consider it one of the best movies in film history. The acting, the cinematography, the music, everything in the film is simply outstanding. It is a magnificent study of J. Robert Oppenheimer, who led the Manhattan Project in developing a weapon so powerful it could annihilate everyone on Earth. No surprise, Oppenheimer had a real problem adjusting to the reality he had created for the world and himself. The moral responsibility that Oppenheimer and other brilliant scientists faced nearly 80 years ago is just as relevant today as we enter the rapidly expanding era of artificial intelligence (AI).

First, let’s go back to World War II. America did not enter the war until December 8, 1941, the day after the Japanese bombed Pearl Harbor. But, of course, the U.S. had been monitoring developments well before then. In 1933, Hitler forced many top Jewish scientists, including geniuses like Albert Einstein, out of their jobs. Many fled to the U.S. and England and had a huge impact on U.S. innovation. The Manhattan Project was started in 1942 in response to fears that the scientists remaining in Germany were close to building a nuclear weapon that the Nazis could use to conquer the world. The Manhattan Project combined various research agencies with the goal of weaponizing nuclear energy as quickly as possible.

Robert Oppenheimer had already been working on the physics of nuclear fission when he was named director of the Los Alamos facility in New Mexico in 1943. His team’s job was to develop an atomic bomb to end WWII before the Nazis could. Oppenheimer settled on an atomic (fission) bomb rather than the even more devastating hydrogen bomb. On July 16, 1945, in New Mexico, the U.S. detonated the first atomic bomb in what was called the Trinity Test, which sent a mushroom cloud 40,000 feet into the sky and started the Atomic Age. (BTW, the cinematography of the Trinity Test in the movie is amazing.)

Ten days later, at the Potsdam Conference outside Berlin, the U.S. delivered an ultimatum to Japan: surrender or face “prompt and utter destruction.” Japan would not surrender. U.S. military leaders selected Hiroshima as the target to demonstrate American power and push Japan to surrender. The “Little Boy” bomb was dropped on August 6, 1945, causing death, injuries and destruction over roughly five square miles. Three days later, with no surrender in place, the “Fat Man” bomb was dropped on Nagasaki. The two bombs killed between 100,000 and 200,000 people and leveled two cities. Japan offered to surrender the next day, August 10, and formally announced its surrender on August 15, 1945, 78 years ago this month.

Almost immediately, the debate started: should America have dropped these bombs? Some said we should have warned the target cities first or dropped a “demonstration” bomb. Others pointed to all the American lives that would have been lost had we invaded Japan to end the war. Regardless, nuclear weapons were spreading. In 1949, the Soviet Union exploded its own atomic bomb. By the 1980s, the U.S. and Soviet Union each had more than 25,000 nuclear weapons, enough to destroy the world many times over. The end of the Cold War shrank the arsenals, but extremely destructive stockpiles still exist today.

And now, AI has opened Pandora’s box. While many technology companies had been working on artificial intelligence for years, it wasn’t until OpenAI launched ChatGPT on November 30, 2022 that the “toothpaste was out of the tube” for AI. Quickly, Google, Baidu and Meta accelerated the development of their competing products. Some people are afraid machines will take over their jobs. Others worry that AI will surpass human intelligence and become a threat to our existence. Sounds a little like a nuclear bomb, doesn’t it?

Mo Gawdat, a former executive at Google X who has spent his career in tech, has written an excellent book, “Scary Smart.” Mo believes that AI can solve some of the world’s biggest problems, such as poverty, climate change and disease. If used correctly, he believes AI can be a force for good. He states that “AI is already more capable and intelligent than humanity.”

However, Mr. Gawdat acknowledges that AI comes with risks. The potential acceleration of machine intelligence is staggering. By some estimates, GPT-4 already scores around 155 on IQ tests, almost as high as Einstein’s estimated 160. Its successor, GPT-5, is projected by some to reach an IQ over 1,000, and in 25 years, Gawdat predicts, AI will be a billion times more intelligent than humans. AI can work at great speed without distractions. However, because humans design the algorithms that form AI, and because we are imperfect, much attention must be given to how AI is developed. With this kind of mental power (compared to the atomic power of Oppenheimer’s time), think of the possible damage if AI develops the wrong way. He suggests that we need to be careful to “develop AI and ensure we have safeguards in place to prevent it from becoming a threat.”

Mr. Gawdat expects that while AI is in its infancy, it will develop very quickly. As soon as 5-10 years from now, it will have made a great impact on us and the world, and it may become too powerful to control and may disregard human morals. He concludes, “At that point, will AI be used to deliver benefits to ourselves and others, or simply be directed by profits and power for a few?”

AI is a very big deal, just as the atomic bomb developed by Oppenheimer and the Manhattan Project was. Both arrived very quickly and had a huge impact on the world. It’s incumbent on all of us to keep up to date with AI and do our part to make sure it works for all of us in a positive way.


Detterbeck Wealth Management is a fee-only financial planning / wealth management company with offices in Palatine, IL (Chicago area) and Charleston, SC, serving clients locally and across the country. To contact us about setting up an appointment, please see our contact us page.