AI: Are you an Optoomer, Doomer or Ambivoomer?

DALL-E 2: neural network on a blue background with friendly cyborg head in foreground

Are you, like us, uncertain about the future impacts of AI? 

One morning you’re an Optoomer (my word: “optimistic person”). By afternoon, a Doomer. Come nightfall, an Ambivoomer (my word, too: “ambivalent person”).

I laughed at this description of AI from Jacob Stern in The Atlantic.

…these multibillion-dollar programs, which require several city blocks’ worth of energy to run, may now be able to code websites, plan vacations, and draft company-wide emails in the style of William Faulkner. But they have the memory of a goldfish.

Goldfish with a three second memory

As we learned when we asked ChatGPT about the Dolomites. Every time you start a new conversation with the bot, you’re starting from scratch.

We’d already planned our trip but asked the bot for the best moderate hikes (all five on its list were ones we’d chosen), our style of accommodation (none of its places appealed to us) and foodie restaurants (one was on our list, two were too pricey, one was permanently closed). Its itinerary didn’t include its own recommendations, even on the second and third prompts! It has the memory of a fry, a baby goldfish. (For now, that is.) I gave up, smug about our itinerary. But I can see where future travel research could be measured in minutes, not days.

A picture of us 4 months from now hiking an AI-generated Tre Cime di Lavaredo during sunset’s golden hour

There was some value in its response to Magellan’s ask about speed cameras and fines. “In general, fines for minor offenses such as slightly exceeding the speed limit can start from around €40, while more serious offenses such as driving at high speeds in residential areas or construction zones can result in fines of several hundred euros or more.”

We moved on to another trip.

“We’re going to Saskatoon, Saskatchewan, from August 9-13 for a celebration of guys who are all turning 75 this year. Spouses will be attending, too. What do you suggest we do?” I asked ChatGPT.

Of its seven suggestions (Wanuskewin Heritage Park, the Meewasin Trail, a riverboat tour, Remai Modern, the Broadway District, local festivals and the Ukrainian Museum of Canada), the only one we wouldn’t have thought of ourselves was the Ukrainian Museum.

Team Expo 67 in front of an AI-generated background – Saskatoon’s Broadway Bridge framing the Bessborough Hotel in a watercolour picture

Saskatonians—do you agree with its suggestions of bars? Hose & Hydrant Brewing Company, The Yard and Flagon, Ayden Kitchen & Bar, Leopold’s Tavern and Lucky Bastard Distillers?

Even I knew that The Hollows, one of ChatGPT’s five restaurant recommendations, had closed. And why didn’t it include Hearth?

La Presse reported that when a columnist asked for tourist recommendations in Montreal, the chatbot “invented a venue, gave wrong directions, and was continually apologizing for providing bad information.”

Travel planning is not its forte. Yet. The operative word. 

According to the Brookings Institution, global AI is now a US$119.8 billion market that will rise to US$1.6 trillion by 2030. Microsoft alone has reportedly invested US$10 billion in OpenAI.

Recommending hiring decisions, choosing which students get into university, detecting plagiarism, deciding benefit claims—in a constellation of areas AI is busy, busy, busy. Never weary, cranky or demanding.

It already drives the financial sector, deciding on loans, managing portfolios, calculating taxes, detecting fraud patterns and banking by voice command. (A bot at RBC popped this written question to me: “Do you know how much money you spend at Benton Brothers each month?” AI knows we’re cheeseheads.)

In healthcare AI has been a boon to diagnosis, outperforming human radiologists for the past decade and now challenging oncologists and cardiologists: a computer designed in Singapore can look into your eyes and predict your risk of having a heart attack. For concerns like “what do I do if I swallow a toothpick,” patients preferred ChatGPT’s response to a GP’s 80% of the time.

AI has composed hundreds of pieces of journalism for Associated Press and countless other outlets. 

Beyond surveillance, in national security it’s used to analyze outcomes and recommend new tactics.

In the legal profession (ChatGPT passed the bar exam in March) AI is already guiding judgements. It’s estimated 44% of jobs in this sector are vulnerable. 

Cities use AI for all sorts of management issues, like telling the fire department how to respond to calls. This August, Kelowna will start using it to analyze applications for construction and renovations, check for compliance and issue permits, cutting the process to minutes instead of months.

It’s said the only jobs that will remain unscathed are those involving extreme physical labour and dexterity, those requiring the human touch, and those depending on creativity, intuition and empathy.

Robots writing novels

AI researchers and tech leaders, including Chief Twit Elon Musk, are calling for a pause because of AI’s risks to society, humanity and democracy. They want a legal framework, industry safety standards and the oversight of independent experts. In his superb Globe and Mail article, “The peril and promise of artificial intelligence,” Ian Brown quotes Rijul Gupta, the 30-year-old CEO of DeepMedia AI:

It’s rare for an executive of a synthetic media company to say we need regulation in the space. But I just don’t see a world where this technology is protected against unethical uses without some type of government intervention. It’s very similar to social media, which claimed they would regulate themselves. And it was only after a lot of disasters happened that government tried to regulate them. By that point, it was too late.

But as Cal Newport says in The New Yorker:

We would need a Borgesian library filled with rules tailored for a near-infinite number of esoteric topics, themes, styles, and demands… If the data that define GPT-3’s underlying program were printed out, they would require hundreds of thousands of average-length books to store.

Italy became the first western country to ban ChatGPT. Good luck to the carabinieri in enforcement.

Given that you can prompt an image generator with “in the style of (Kusama, Banksy, Hockney…),” it’s not surprising that artists are taking tech to court for infringement of the three C’s: copyright, compensation and credit. Kyle Chayka, a writer at The New Yorker, says most AI-generated art is banal and lacks texture. Can you tell, without looking at the captions, which art piece was hand-drawn?

DALL-E 2 vs a 7-year-old artist

More than rules for AI’s usage, shouldn’t we be worried that there are no regulations for its development? 

I often think of who is behind AI, concentrating enormous wealth into fewer and fewer hands. Nerdy guys in Silicon Valley, the heroes who brought us Twitter, Airbnb, Uber, DoorDash…

DALL-E 2: illustration of a nerdy guy in basement programming artificial intelligence

Ian Brown quotes one of them, PhD engineer Ben Bell, who has been working on AI for 35 years and is president of Eduworks, a company that builds AI training systems for the US Department of Defence. Here’s how Ben describes AI-programming geeks:

My suspicion is that these brilliant people are also, like every one of us, flawed, particularly when it comes to interpersonal dynamics and human interaction. I think they understand and value intellect. A lot of them are flummoxed by all the ways people interact that are not intellectual – the emotional body language, the cues that a lot of these people seem to miss in everyday life. Creating artificially intelligent minds gives them an entity that matches the intellect that they value, but doesn’t exhibit the perplexing tendencies real people exhibit, that they can’t understand or process. Artificially intelligent entities aren’t judgy. And they’re not needy. They have no expectations.

Remember this quote from B.F. Skinner? “The real problem is not whether machines think but whether men do.” Or these words from “Ronda,” commenting on Ian Brown’s article as if she were HAL from 2001: A Space Odyssey: “If a humanoid like trump can fool 200 million, just think what we can do.”

Try this. What is the third word of this sentence? 

When asked, ChatGPT-4 answered “third” instead of “the.” That elicited this response from Gary Marcus, professor emeritus of psychology and neural science at New York University and founder of Geometric Intelligence, a machine learning company:

I cannot imagine how we are supposed to achieve ethical and safety ‘alignment’ with a system that cannot understand the word ‘third’ even [with] billions of training examples.
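If you’d like to re-run the test yourself rather than take our word for it, here is a minimal sketch, assuming you have an OpenAI account, an API key in your environment and the openai Python package installed (the model name below is just a placeholder for whatever is current when you try it):

```python
# A minimal sketch for re-running the "third word" test yourself.
# Assumptions (not from the article): an OpenAI account, the openai
# Python package (version 1 or later), and an OPENAI_API_KEY set in
# your environment. "gpt-4" is a placeholder model name.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = "What is the third word of this sentence?"
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": question}],
)
# The correct answer is "the"; Gary Marcus's point is that the model said "third".
print(response.choices[0].message.content)
```

The web version of ChatGPT works just as well for a one-off try; a script only makes it easy to repeat the same question across models.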

Colin Koopman, a philosopher at the University of Oregon, says:

Scaremongering about the future is a distraction from the current harm existing artificial intelligence is already causing. 

(And ponder this: Microsoft recently laid off its entire AI ethics team.)

And this (yes, Microsoft, I’m picking on you): when people reported strange interactions with GPT-4 after Microsoft built it into its Bing search engine, the company said the only way to improve the product is to “set it loose and see what happens.”

Commenting on Ian Brown’s article, “Quesca” noted that a few years ago in the UK an AI program to decide who qualified for university was giving preference, when grades were equal, to kids of the aristocracy. Upper-class programmers had encoded that bias. “If AI can get rid of this irrational intelligence I’m all for it. But even AI will probably understand who’s signing the checks!!??”

And how will universities detect an AI-generated thesis?

Three scholars in England asked ChatGPT to “write an original academic paper, with references, describing the implications of GPT-3 for assessment in higher education” for a journal. The professors wrote only the subheads but did brief the journal’s editors about their prank. The paper fooled all of its peer reviewers; none of them guessed it was AI-generated.

AI can also churn out text that’s full of errors. A writer asked ChatGPT to find a story in The Atlantic about tacos and,

…the bot fabricated the headline “The Enduring Appeal of Tacos” for a story that was never written or published!

Ian wrote about a worrisome issue: people’s inability to distinguish between real human intelligence and AI deepfake intelligence. 

He cited the concerns of Emily Tucker, a human-rights lawyer and professor at Georgetown Law School’s Centre on Privacy and Technology: (1) AI “…can’t ask why. So it can never come close to being human. It does not have the capacity to question its own capacity.” (2) “…machine learning’s decontextualized decision-making is already making humans more machine-like.” 

And how on earth will we rationalize the energy it takes to run these multi-billion-dollar AI training programs and the CO₂ blowing out the back end?

The Guardian: “Our phones and gadgets are now endangering the planet.” Illustration: Andrzej Krauze

The MIT Technology Review reported that training GPT-2, an earlier model in the family behind ChatGPT, emitted 50 metric tonnes of carbon dioxide equivalent – nearly five times the lifetime emissions of an average American car. GPT-2 was trained with 1.5 billion parameters in 2019. Since then, GPT-3 was released with 175 billion parameters, emitting 560 metric tonnes. And the just-released GPT-4, trained with more than 1 trillion parameters, emits how much? That’s big tech’s big secret. The global technology sector is now responsible for 2% to 4% of global carbon-dioxide-equivalent emissions, more than the aviation sector!

Do you remember Saturday Night magazine? Here’s what one of its former editors, Kenneth Whyte, wrote recently in The National Post:

The content churned out by today’s media companies will be somewhere between worth less and worthless, making it difficult for those companies to afford original, high-value human journalism. … generative AI has the potential to destroy a lot of value in the literary world without producing a single great work of literature.

In an essay (written when Joe Clark was prime minister and Jimmy Carter headed the US) about the emergence of a new “mechanical kingdom” of life, Lewis Thomas, an American physician, poet, etymologist, essayist, educator, policy advisor and researcher, wrote:

As extensions of the human brain, they have been constructed with the same property of error, spontaneous, uncontrolled, and rich in possibilities.

We leave you with this comment from “MontrealorO” to Ian’s article in the Globe:

As long as AI is programmed by us human, we are safe.

UPDATE: June 13, 2024. Brown, Ian. “For Geoffrey Hinton, the godfather of AI, machines are closer to humans than we think.” The Globe and Mail. June 13, 2024.
UPDATE: January 18, 2024. See how you really feel about AI with this incredible TV series (one season; we watched its seven episodes in two days; it’s that good) “A Murder at the End of the World.”
UPDATE: January 10, 2024. “Artificial Intelligence and Emotional Intelligence: Why Writers and Poets Need to be Part of the Conversation on the Future of AI.” Elif Shafak’s Substack
UPDATE: August 19, 2023. In the Guardian, David Runciman discusses, “The end of work: which jobs will survive the AI revolution?”
UPDATE: August 6, 2023. “AIs Will Be Our Mind Children,” more like our descendants, says Robin Hanson in today’s Quillette.
UPDATE: July 25, 2023. In C2C Journal is an article, “AI, the Destruction of Thought and the End of Humanities,” by Christopher Snook, a lecturer in the Faculty of Arts and Social Sciences at Dalhousie University in his hometown of Halifax. The included map of Silicon Valley is an eye-opener.
UPDATE: July 11, 2023. In his article “My A.I. Writing Robot” in The New Yorker, Kyle Chayka, one of my favourite writers, reviews a few writing bots. I think I’d like Mindsera from Estonia, which tries to be more of an editor than a writer by using A.I. to give its human users “personalized mentorship and feedback” during the writing process.
UPDATE: July 6, 2023. Forget governments and self-regulating Silicon Valley behemoths. In “In Defense of Humanity” in The Atlantic, Adrienne LaFrance pleads for a cultural and philosophical movement to meet the rise of AI, what she says may be “the most consequential technology in all of human history,” emphasizing that “What defines this next phase of human history must begin with the individual.” “In an age of mater, and snap reactions, and seemingly all-knowing AI, we should put more emphasis on contemplation as a way of being,” she says, a movement that should “prioritize humans above machines and reimagine human relationships with nature and with technology, while still advancing what this technology can do at its best. … We need a human renaissance in the age of intelligent machines.” I love her thoughts on travel: “…no technology is as good as going to the place, whatever the destination…This is why you make the trip, you cross the ocean, you watch the sunset…”
UPDATE: June 29, 2023. Forbes has an interesting article on the amount of water ChatGPT uses: it drinks the equivalent of a 500 ml bottle of water for every conversation of 20-50 simple questions! See “AI’s Unsustainable Water Use: How Tech Giants Contribute to Global Water Shortage,” by Federico Guerini, April 14, 2023.
UPDATE: June 5, 2023. Joseph Wilson, in “Will AI really change everything? Not likely” in The Globe and Mail, notes that a 2023 survey done by Innovative Research Group for the Provocative Ideas Festival shows 47% of Canadians are concerned about AI, 44% are ambivalent and 9% are excited. “The hype will allow tech companies to pump their valuations sky-high, further concentrating capital and technological knowledge in the hands of very few billionaires” and as a result “take advantage of poorly paid temp workers or refuse calls to be transparent with their algorithms, or flood social media with misinformation, or violate copyright laws by scraping the web for data without the permission of its owners.”
UPDATE: May 25, 2023. Zainab Choudhry in The Globe and Mail writes that “AI programs like ChatGPT are built on mass copyright infringement.”
UPDATE: May 15, 2023. Matthew Hutson in The New Yorker asks “Can we stop runaway A.I.?” He ends his essay with these words: “And yet it may be that researchers’ fear of superintelligence is surpassed only by their curiosity. Will the singularity happen? What will it be like? Will it spell the end of us? Humanity’s insatiable inquisitiveness has propelled science and its technological applications this far. It could be that we can stop the singularity—but only at the cost of curtailing our curiosity.”
UPDATE: May 14, 2023. How did we miss the movie “Her” in 2013? If you did too, carve out two hours to watch this stellar art piece. “The sci-fi love story goes beyond contemporary human-computer interaction by following the virtual romance between a melancholy man and his operating system. In its contemplation of the disparities between computers and humans, the movie offers unconventional lessons about the complications of love.”
UPDATE: May 12, 2023. “AI: Let’s Worry About the Right Things.” Brendan Craig. Quillette. “Current estimates suggest that the average human brain contains around 86 billion neurons. On average, a neuron has 7,000 synapses, so recent calculations suggest around 600 trillion connections in a human neural network. To put that into perspective, current estimates place the number of internet connections worldwide to be around 50 billion. So one human brain contains the equivalent of 12,000 global internets.” (The arithmetic checks out; see the quick back-of-the-envelope sketch after these updates.)
UPDATE: May 5, 2023. Thanks to Pat for linking us to this DALL-E 2 animation, Critterz.
UPDATE: May 2, 2023. “AI pioneer quits Google to warn about the technology’s ‘dangers’.” Jennifer Korn. CNN. Thanks to Terry for forwarding this to us.
UPDATE: May 2, 2023. “Never Give Artificial Intelligence the Nuclear Codes.” Ross Anderson. The Atlantic.
UPDATE: May 1, 2023. “ChatGPT, Lobster Gizzards, and Intelligence.” Frederick R. Prete. An explanation of how ChatGPT works and why we’ll never have self-driving cars.
UPDATE: April 30, 2023. Thanks to Elaine for linking us to this visual article: “How Smart is ChatGPT” by Marcus Lu, Visual Capitalist, April 26, 2023. You could also open your own OpenAI account. The basic version of ChatGPT is free. DALL-E 2 is also free, but there is a catch. You’re allotted 50 free credits during your first month’s use and 15 free credits after that.
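About the Quillette figures in the May 12 update above, here is the back-of-the-envelope check referenced there, a minimal sketch in Python (the neuron, synapse and internet-connection counts are Brendan Craig’s estimates, not ours):

```python
# Back-of-the-envelope check of the figures quoted from Brendan Craig's
# Quillette piece; all three counts are his estimates, not measurements.
neurons = 86e9                 # neurons in an average human brain
synapses_per_neuron = 7_000    # average synapses per neuron
internet_connections = 50e9    # estimated internet connections worldwide

brain_connections = neurons * synapses_per_neuron             # ~6.0e14, i.e. ~600 trillion
internets_per_brain = brain_connections / internet_connections

print(f"{brain_connections:.2e} connections in one brain")            # 6.02e+14
print(f"{internets_per_brain:,.0f} 'global internets' per brain")     # 12,040, roughly 12,000
```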

Al-Sibai, Noor and Christian, Jon. “BuzzFeed Is Quietly Publishing Whole AI-Generated Articles, Not Just Quizzes.” Futurism. March 30, 2023. “The 40 or so articles, all of which appear to be SEO-driven travel guides, are comically bland and similar to one another.”

Anderssen, Erin. “We can build better, fairer algorithms in a world of angry bias—so why don’t we?” The Globe and Mail, April 13, 2023. 

Barlow, Scott. “Noteworthy: Scott Barlow on investing opportunities in AI, retail sales numbers and what happens when social media meets obesity treatments.” The Globe and Mail. April 22, 2023.

Brown, Ian. “The peril and promise of artificial intelligence.” The Globe and Mail. April 1, 2023. Worth a year’s subscription, Ian’s article is the best we’ve read on AI. 

Castaldo, Joe. “Tech leaders cite ‘profound risks’ as they call for pause on AI.” The Globe and Mail. March 30, 2023.

Chayka, Kyle. “Is A.I. art stealing from artists?” The New Yorker. February 10, 2023.

Chocano, Carina. “The Language Game.” The New Yorker. April 24 & May 1, 2023.

Fumano, Dan. “One B.C. city’s answer to speed up housing: Get humans out of the way.” Vancouver Sun, April 23, 2023. 

Halpern, Sue. “What we still don’t know about how AI is trained.” The New Yorker, March 29, 2023.

Hanson, Robin. “What Are Reasonable AI Fears?” Quillette. April 14, 2023.

Harte, Aidan. “The Horseless Comanche.” Quillette. April 20, 2023.

Heikkila, Melissa. “We’re getting a better idea of AI’s true carbon footprint.” MIT Technology Review. November 14, 2022.

Kirkey, Sharon. “ChatGPT bedside advice more ’empathetic’ than MD’s.” The National Post. April 29, 2023.

Knight, Chris. “Artist reveals his prize-winning ‘photo’ is an AI creation.” National Post. April 20, 2023.

The Leverhulme Centre for the Future of Intelligence. “A highly interdisciplinary research centre addressing the challenges and opportunities posed by artificial intelligence (AI).” Have a look at the books on AI that they suggest.

Lewis, Thomas. “To Err is Human.” The Medusa and the Snail. 1979.

Lovelock, James. Novacene: The Coming Age of Hyperintelligence. Great Britain: Penguin Random House UK, 2019. (Published to coincide with his 100th birthday.) “The most influential scientist and writer since Charles Darwin,” says The Irish Times. This 129-page gem on Gaia and the future of humanity, found on most suggested reading lists about AI, offered me the most intuitive outcome. James says:

…we think and act about 10,000 times faster than plants. The experience of watching your garden grow gives you some idea of how future AI systems will feel when observing human life…Do not be depressed by this. We have played our part. Take consolation from the poet Tennyson…’that which we are, we are.’ That is the wisdom of great age, the acceptance of our impermanence while drawing consolation from the memories of what we did and what, with luck, we might yet do.

Nardi, Christopher. “Ottawa developer’s chatbot aims to help with tax filing.” National Post. April 11, 2023.

Newport, Cal. “What kind of mind does ChatGPT have?” The New Yorker. April 13, 2023.

Pape, Gordon. “Want to know how to profit from AI? Here’s what ChatGPT advises.” The Globe and Mail. April 11, 2023.

Roboto: Did you know there’s a typeface called Roboto? Google says “Roboto has a dual nature. It has a mechanical skeleton and the forms are largely geometric. At the same time, the font features friendly and open curves. While some grotesks distort their letterforms to force a rigid rhythm, Roboto doesn’t compromise, allowing letters to be settled into their natural width.”

Rosenberg, David and Wendling, Julia. “Casualties of the AI revolution.” National Post. April 11, 2023.

Stern, Jacob. “GPT-4 Has the Memory of a Goldfish.” The Atlantic, March 17, 2023.

Warzel, Charlie. “People Aren’t Falling for AI Trump Photos (Yet).” The Atlantic, March 24, 2023.

Whyte, Kenneth. “ChatGPT and the looming revolution in the book publishing world.”  National Post, March 18, 2023.

Wu, Daniel. “Professors publish AI-generated paper that fools reviewers.” National Post, Washington Post, March 24, 2023. 

10 Responses

  1. Aunt Ethel, wow. Was Lovelock right? Are we humans only a small part of the ongoing evolution of the living earth (Gaia)?

  2. Yikes…I am so far out of this realm..Scary to me in my little sheltered world.
    So true about the “Hey Google” comment from Patrick, see/hear it all the time.
    Makes my head swirl with fright! Happy May Day..Now thats something that Google can look up! Cheers..

    1. Happy May Day to you, the first day of summer in Celtic times, “a celebration of the change of seasons in nature and a time to reflect on how those changes are mirrored in their own lives,” according to Merriam-Webster’s feed today—something we can do, without AI—a trait that will take a longgggg time to program into a bot.

  3. I’m still laughing about having the “memory of a goldfish”. Not for long, I imagine. That is quite the amazing reference list you have compiled. And thanks for testing these AIs (almost intelligent?).
    A colleague of mine used the Chat AI to look up new research about tree ring and climate research. He said it gave him a really nice list of references, but none of them were real!
    For now, you can stump Google/Alexa by asking questions like “what are some swear words in the Plains Cree language”. Try it, maybe it has improved by now.

    1. Almost Intelligence–love it. You’ve likely read Novacene–would be interested in what you think of James Lovelock’s predictions. And didn’t Magellan find the perfect cartoon for the goldfish?

  4. It’s interesting to see how far mankind has come in developing technology in the last 500 years, yet we have the latest rocket blowing up shortly after take off, I am sure we have some IT guidance involved here, but when IT is flawed by lack of human input how can this be called progress. 💸💸💸💸💸💸💸
    🤔🤔🤔🤔🤔🤔🤔🤔🤔🤔🤔🤔🤔🤔🤔🤔🤔🤔

  5. It really is amazing where things are going with AI. A while back I saw this clip about a nurse robot:
    https://www.youtube.com/watch?v=p0ePTSdE2GI
    On the flip side, we recently saw a video showing a Israeli combat robot. The caption for this video was “SCARY” – and it is!
    The speed that AI is being developed and “fine tuned” is amazing. The kids / grandkids these days don’t even look stuff up on a computer (or heaven forbid – a book) – they simply call out – “Hey Google” and 99 % of their queries will be answered by a device that is listening somewhere in the room……

    1. Definitely Doomer here. A society where AI controls AI? In the nineteenth century Shakespeare warned England about the direction of changes that became known as the Industrial Revolution: “The world is too much with
      us …….we have given our hearts away”. What remains when AI takes our minds away?

