Bing Confabulates Its Own Version of a Classic Hemingway Story

I continue to be fascinated by the topic of AI confabulation. The other day I read the Ernest Hemingway short story “Fifty Grand.” It’s about a boxer who fights a championship bout. I liked the story but was confused by a couple of details in the end. So, I turned to my favorite AI, Bing, which proceeded to hallucinate a whole other version for me.

Of course, I’ve seen AIs make up other scenes from famous literary works before. Bard even confabulated a “woke” version of the poet E.E. Cummings. So Bing’s faulty summary of the Hemingway story was not a shock. But it’s worth writing about because of what it reveals about Bing in particular and similar AIs more generally.

Confabulating Parts of Fifty Grand

“Fifty Grand” is a story that hinges on a couple of boxing-related bets: one known, one unknown. Because the unknown bet isn’t made clear, the reader isn’t sure what happened until the end of the story. To help clarify the ending, I asked Bing about it.

Bing’s summary of the story left out a critical scene in which a couple of “wise guys” (by the name of Morgan and Steinfelt) come to visit the boxer, Jack, and make some sort of proposal to which the reader isn’t privy.

Bing’s summary also includes a number of events that never happened, such as a meeting between Jack and his wife, her accusation that he was going to throw the fight, Jack’s broken jaw, and Jack’s trip to the hospital.

Bing didn’t accurately relate events in the story even though it had access to summaries such as the Wikipedia entry about it (to which Bing linked) as well as to the story itself (though in an unsearchable PDF format, which may have been an issue).

Wikipedia Gets It Right

So, did Bing get it wrong because Wikipedia had it wrong? No. Wikipedia summarizes the fight as follows:

The fight begins. Doyle (the narrator) reports, “There wasn’t anybody ever boxed better than Jack,” and the fight goes well for Jack for several rounds as his left fist repeatedly connects with Walcott’s face. By the seventh round, Jack’s left arm gets heavy and Walcott begins to gain the upper hand as he pummels Jack’s torso. After the eleventh round John Collins tells Jack the fight will go to Walcott, but Jack says he thinks he can last through the twelfth round and goes to meet Walcott and “finish it off right to please himself.” Walcott backs Jack up against the ropes and delivers a very low blow; Jack, in obvious pain, stays standing–”if he went down, there go fifty thousand bucks” because he would win on a foul–and tells the referee not to call the foul because it must have been an accident. He walks toward the dumbfounded Walcott and begins swinging, landing several body punches before hitting his opponent twice below the belt. Walcott falls and wins the match on a foul. Back in the dressing room, Jack comments, “It’s funny how fast you can think when it means that much money.”

In a couple of sentences, Wikipedia clarifies why Jack would have lost the money he bet on Walcott (the other fighter): because Walcott’s blow would have been deemed a foul and Jack would have won the fight on a technicality.

This suggests that the shady characters who talked to Jack earlier in the story wanted Jack to win the fight. At the end of the story, the narrator tells us that Walcott was favored to beat Jack, which is why Morgan and Steinfelt wanted to bet and win on Jack (that is, they’d win more money that way).

But it appears that Jack’s agreement with them was that he would lose the fight. That’s why, toward the end of the story, Jack’s manager says, “They certainly tried a nice double-cross” and Jack responds with sarcasm, “Your friends Morgan and Steinfelt. You got nice friends.”

So, Morgan and Steinfelt wanted Jack (and most other people) to bet against Jack’s victory so they would make more money when Jack won. In essence, Jack turned the tables on them by making sure he lost the fight even while getting revenge on Walcott for his dirty boxing and treachery.

What Can We Learn About Today’s Neural Networks?

I certainly don’t “blame” Bing for getting a nuanced story wrong. I know that the confabulations boil down to how the algorithms work, as explained in another post. In fact, unlike the other AIs on the market, Bing pointed me to references that, if I hadn’t already read the story, would have allowed me to verify it was giving me the wrong information. That’s the beauty of Bing.

Not Quite Plagiarism

The famous intellectual Noam Chomsky has claimed that the generative AIs are just a form of “high-tech plagiarism.” But that’s not quite right. I don’t know whether the story “Fifty Grand” was part of the data on which the Bing model (based on GPT-4) was trained. If it was, then the model wasn’t able to parse, compress, and “plagiarize” that nuanced information in such a way that it could be accurately related after training.

But we do know that Bing was able to access (or at least point to) the Wikipedia article as well as an “enotes” summary of the story, so it knew where to find the right plot summary and interpretation. The fact that it still confabulated things indicates that the makers and users of these technologies have some serious issues to address before we can trust whatever the AIs are telling us.

Will Hallucinations Ever Go Away?

There’s some debate about whether the confabulations and hallucinations will ever go away. On one hand are people such as Emily Bender, a linguistics professor and director of the University of Washington’s Computational Linguistics Laboratory, who has said, “This isn’t fixable. It’s inherent in the mismatch between the technology and the proposed use cases.”

On the other hand are those who think the problems are indeed fixable. Microsoft co-founder Bill Gates said, “I’m optimistic that, over time, AI models can be taught to distinguish fact from fiction.”

Maybe APIs Will Help Fix the Issue

Some think the confabulation problem can be addressed, at least in part, by better use of APIs (that is, application programming interfaces). By interfacing with other types of programs via APIs, large language models (LLMs) can gain capabilities that they themselves lack. It’s like when a human being uses a tool, such as a calculator, to solve problems they could not easily solve on their own.
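To make the idea concrete, here is a minimal sketch in Python of how such tool use often works in practice. The model emits a structured “tool call” instead of guessing at an answer, and a thin layer of glue code runs the actual tool and returns the result. The function names, the JSON shape of the tool call, and the simulated model output below are all my own illustrative assumptions, not the protocol of any particular product.

```python
import json

# Hypothetical tool registry: functions the model is allowed to call.
def calculator(expression: str) -> float:
    """Evaluate a simple arithmetic expression (digits and + - * / . ( ) only)."""
    allowed = set("0123456789+-*/. ()")
    if not set(expression) <= allowed:
        raise ValueError("unsupported expression")
    return eval(expression)  # tolerable here: input restricted to arithmetic characters

TOOLS = {"calculator": calculator}

def handle_model_output(model_output: str) -> str:
    """If the model emitted a JSON tool call, run the tool and return its result;
    otherwise pass the model's text through unchanged."""
    try:
        call = json.loads(model_output)
    except json.JSONDecodeError:
        return model_output  # plain text, no tool requested
    if not isinstance(call, dict) or "tool" not in call:
        return model_output
    fn = TOOLS[call["tool"]]
    return str(fn(**call["arguments"]))

# Simulated model output requesting a calculation it cannot do reliably on its own.
request = '{"tool": "calculator", "arguments": {"expression": "50000 * 2.5"}}'
print(handle_model_output(request))  # the exact arithmetic comes from the tool, not the LLM
```

The point of the pattern is that the answer to the arithmetic comes from deterministic code rather than from the model’s next-word predictions, which is exactly where hallucination tends to creep in.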

That is, in fact, part of the hope associated with Gorilla, a LLaMA-7B model designed specifically for API calls. This particular LLM is a joint project developed by researchers from UC Berkeley and Microsoft, and there is now an open-source version available.

So, if Gorilla can access APIs more dependably, it may reduce the hallucination problem.

At least, that’s the hope.

We’ll see over time.

A Summary of the “Godfather of artificial intelligence talks impact and potential of AI” Interview

This is an AI-enabled summary of an interview with cognitive psychologist and computer scientist Geoffrey Hinton. He’s played a big role in the development of computer neural networks and was the guest of Brook Silva-Braga on the CBS Saturday morning show. The YouTube video can be seen at the end of this summary. I added a couple of salient quotes that touch on the “alignment” problem. The art is by Bing’s Image Creator.

Hinton’s Role in AI History

Hinton discusses the current state of artificial intelligence and machine learning. He explains that his core interest is understanding how the brain works and that the current technique used in big models, backpropagation, is not what the human brain is doing. He also discusses the history of AI and neural nets, which he was a proponent of, and how neural nets have proven to be successful despite skepticism from mainstream AI researchers.

The video describes how ChatGPT has vast knowledge compared to a single person due to its ability to absorb large amounts of data over time. The underlying approach, backpropagation, was first proposed in 1986 and later surpassed traditional speech recognition methods thanks to advances in deep learning and pre-training techniques. Hinton’s background in psychology originally led him to neural networks, and his students’ research resulted in significant developments in speech recognition and object recognition systems.

The interview touches on various topics related to computer science and AI, such as the potential impact on people’s lives, the power consumption differences between biological and digital computers, and the use of AI technology in areas like Google search. Hinton also discusses the challenges of regulating the use of big language models and the need to ensure that AI is developed and used in a way that is beneficial to society (a need he doesn’t feel is being well met).

Silva-Braga: What do you think the chances are of AI just wiping out humanity? Can we put a number on that?

Hinton: It’s somewhere between 1 and 100 percent (laughs). Okay, I think it’s not inconceivable. That’s all I’ll say. I think if we’re sensible, we’ll try and develop it so that it doesn’t, but what worries me is the political situation we’re in, where it needs everybody to be sensible. There’s a massive political challenge it seems to me, and there’s a massive economic challenge in that you can have a whole lot of individuals who pursue the right course and yet the profit motive of corporations may not be as cautious as the individuals who work for them.

Hinton addresses the common criticism that large language models like GPT-3 are simply autocomplete models. He argues that these models need to understand what is being said to predict the next word accurately. In addition, they discuss the potential for computers to come up with their own ideas to improve themselves and the need for control. Hinton also addresses concerns about job displacement caused by these models, arguing that while jobs will change, people will still need to do the more creative tasks that these models cannot do.

Silva-Braga: Are we close to the computers coming up with their own ideas for improving themselves?

Hinton: Um, yes, we might be.

Silva-Braga: And then it could just go fast?

Hinton: That’s an issue we have to think hard about: how to control that.

Silva-Braga: Yeah, can we?

Hinton: We don’t know. We haven’t been there yet, but we can try.

Silva-Braga: Okay, that seems kind of concerning.

Hinton: Um, yes.

Overall, the interview provides insights into the current state and future of AI and machine learning, as well as the challenges and opportunities that come with their widespread use. It highlights the need for careful consideration and regulation to ensure that these technologies are developed and used in a way that benefits society.

To read a full transcript of the interview, go to the original YouTube page (click on the three horizontal dots and then select “Show transcript”).