
AI responsibility in a hyped-up world

Carefully with Per Axbom

03/20/23


It's never easier to get scammed than during an ongoing hype. It's March 2023 and we're in the middle of one. Rarely have I seen so many people embrace a brand-new experimental solution with so little questioning. Right now, it's important to shake off the mass hypnosis and examine the contents of this new bottle of AI that many have started sipping, or have already started refueling their business computers with. Sometimes without management's knowledge.

AI, a term that became an academic focus in 1956, has today mostly morphed into a marketing term for technology companies. The research field is still based on the theory that human intelligence can be described so precisely that a machine can be built to completely simulate it. But the word AI, as we read it in the papers today, usually describes various kinds of computational models that, applied to large amounts of information, are intended to calculate and present a result that serves as the basis for different forms of predictions, decisions and recommendations.

The obvious weak points of these computational models are then, for example:

  • how questions are asked of the computational model (you may need very specific wording to get the results you want),
  • the information it relies on to make its calculation (often biased or insufficient),
  • how the computational model actually performs its calculation (we rarely get to know, because the companies regard it as their proprietary secret sauce; this is what is referred to as a black box), and
  • how the result is presented to the operator* (increasingly as if the machine were a thinking being, or as if it could tell a correct answer from a wrong one).

* The operator is the one who uses, or runs, the tool.

Example of an explanatory model for AI-driven communication, by Per Axbom.

What we colloquially call AI today is still very far from something that 'thinks' on its own. Even if the texts these tools generate can resemble texts written by humans, that is no stranger than the fact that the large body of information the computational model draws on was written by humans. The tools are built to deliver answers that look like human answers, not to actually think like humans.

Or even deliver a correct answer.

It is exciting and titillating to talk about AI as self-determining. But it is also dangerous. Add to this the fact that much of what is marketed and sold as AI today is simply not AI. The term is extremely ambiguous and has a variety of definitions that have also shifted over time. This creates very favorable conditions for anyone who wants to mislead.

Problems often arise when the philosophical basis of the academic approach is mixed with lofty promises about the technology's excellence made by commercial players. And the public, including journalists, of course cannot help but associate the technology with timeless stories of dead things suddenly coming to life.


Clip from the film Frankenstein (1931) where the doctor proclaims that the creature he created is alive. "It's alive!" he shouts again and again.

It's almost like that's the exact association companies want people to make.

We love confident personalities even when they are wrong

Many tech companies seem so obsessed with the idea of a thinking machine that they go out of their way to make their solutions appear to think and feel when they really don't.

With Microsoft's chatbot for Bing, for example, someone decided that it should randomly shower its operator with emoji symbols in its responses. That is the organization's design decision to make the machine appear more human, of course not something the chatbot itself "thought of". It remains, however boring that sounds and however much you try to make it "human" by having it express personal well-wishes, an inanimate object without sensations. Even when it is perceived as "speaking its mind".

Example from Microsoft's chatbot showing its use of emojis. The image shows the bot printing text insinuating that it wishes it were alive.

OpenAI's ChatGPT, in turn, delivers most of its responses with a seemingly incurable assertiveness, regardless of whether the answers are right or wrong. In its responses, the tool may invent references to works that do not exist, attribute to people opinions they never expressed, or repeat offensive sentiments. If you happen to know it is wrong and point this out, it begs forgiveness. As if the chatbot itself could feel remorse.

Then, in the very next second, it can deliver a completely new and equally incorrect answer.

One problem with the diligent, incorrect answers is of course that it is difficult to know th...
