Questioning artificial intelligence?

When we look at the way AI is used today, there is one major flaw: no one asks enough questions. Not the user and not the AI. I have been observing how people use AI for quite some time now. ChatGPT and others made huge breakthroughs in the accessibility of conversational AI. As a result, many new ways of using it have emerged within the last year, a classic result of democratizing a breakthrough technology. Everyone plays around with it, builds with it, and finds their own style. However, there seem to be four main ways in which people use conversational AI:

  • Structuring data
  • Answering complicated yet not complex questions
  • Creating content (pictures, videos, text, code)
  • Automation

Of course, these are not all the AI use cases. But they are the major ones that ChatGPT and similar models made available to so many people. These are the ones where unskilled users feel they can work with AI, the ones that don’t require whole teams of developers and data scientists to build. You just open ChatGPT or Midjourney or one of the many others and start with your idea. The things you want to do are usually a more specific version of:

  • Give me information from a source of data
  • Restructure this data for a specific use case
  • Create more content based on the data / instructions I give you
  • Write some code or an automation for me

And they all have something in common: they are human-directed.

Telling the AI what to do!

We use AI to enhance our capabilities. To do the things we want it to do. And then we hope it does them as we intended. If not, we iterate. However, this usage pattern is mostly a one-way street. The basic interaction is always the same: the user tells the AI what to do. The AI tries to do it. The outcome depends on the AI’s capabilities and the quality of the instruction. That’s where the idea of prompt engineering comes from: give the AI a good instruction so it will do as you tell it. But this approach is flawed.

Imagine telling a random person to repair your car. Most likely, the answer would be: “I can’t do that.” Maybe, if he is friendly, he will answer: “I’m sorry, I’m just a random guy and can’t do that. But if you want, I can tell you where the nearest car repair shop is.” Now you could tell him how to do it. All at once, like a manual he reads once. He would probably still make mistakes, as he can’t hold all the information in his head at the same time. But at least some things would work. You could also split the big task into many smaller tasks: analyze what’s wrong by comparing how things are with how they should be, open the engine, take it apart, replace the broken part, put it back together. Or break it into even smaller steps until he can manage well enough. Then quality-check the results at every step. And if something doesn’t work, just repeat the last step.

This is basically how most people use ChatGPT today. Write a post for LinkedIn. Make it longer. Make it more engaging. Add a statistic about innovation failure rates. Make it sound like this or that famous person. Very commanding. Do as I tell you.

But this limits the capabilities of the AI. It’s a bit like hiring great people: you don’t hire great people to tell them what to do. You hire them so they can tell you what to do.

Asking the AI what to do?

The better way is to ask first. I do this when I want to create pictures of abstract concepts. I don’t say: “Draw success.” I ask: “How could I visualize success? How has success been visualized in a certain context? What visual elements do people associate with success?” Then I rethink what I want in my head and go from there.

You get a joint thought process that uncovers your own thinking. It’s like taking a text, feeding it to the AI, and asking: What am I missing? Where am I wrong?

AI Example

As an example, I fed the text up to this point into ChatGPT and asked it where the text is wrong. Warning: this goes beyond the usual five-minute reading time I aim for on this blog, as ChatGPT is rather talkative. However, I wanted to keep the output as uncorrected as possible for showcase purposes. So here is what I got:

ChatGPT:

“Your text raises several important points about the use of artificial intelligence and how people interact with AI systems like ChatGPT. It is well-structured and introduces a thoughtful perspective on the potential for a more interactive and reciprocal relationship between users and AI. However, there are a few areas where the text could be improved for clarity, accuracy, and coherence:

Ambiguity in the Main Argument: The text suggests that the main flaw in AI usage is a lack of questioning, both from users and the AI itself. It could benefit from a clearer explanation of why questioning is essential and how it could enhance AI’s utility and effectiveness.

Assumption of User Expertise: The text implies that all users are capable of engaging in prompt engineering or can easily learn to do so. This may not be accurate for all users, especially those who are not tech-savvy. Highlighting the diversity of users and their varying levels of expertise could enrich the discussion.

Generalization of AI Use Cases: While the text lists four major use cases (structuring data, answering questions, creating content, and automation), it might oversimplify the vast potential of AI applications. Expanding on the diversity of AI capabilities and exploring examples beyond conversational AI could provide a more comprehensive view.

Comparison with Human Workers: The analogy of hiring great people and not just telling them what to do, but rather being guided by them, is compelling. However, equating AI to human workers might oversimplify the complexities of AI decision-making and ethical considerations. Clarifying the differences between AI suggestions and human expert advice would be beneficial.

Mechanics and Style: There are a few stylistic and grammatical areas that could be refined:

Consistency in Terminology: Ensure consistency in the use of terms like “AI,” “ChatGPT,” and “conversational AI.” Define these terms early in the text to avoid confusion.

Punctuation and Grammar: Review the text for punctuation errors and grammatical inconsistencies, such as comma usage and sentence structure.

Clarity and Conciseness: Some sentences could be made more concise or split into two for better readability.

Engagement with AI’s Limitations and Ethical Considerations: While the text touches on the interaction model with AI, it could also benefit from a discussion on the ethical implications of AI’s suggestions and the importance of human oversight in AI’s decision-making process.

Evidence and Examples: Providing specific examples or case studies of successful questioning strategies with AI could strengthen the argument. This would illustrate how asking the right questions can lead to more innovative and effective uses of AI.

Overall, your text provides a thought-provoking take on how we currently interact with AI and suggests a paradigm shift towards a more collaborative and questioning approach. Addressing these areas could make your argument more robust and compelling.”

This is way better than if I had just told it to optimize the text. Now, given its own feedback, I can have it optimize the text. That looks like the following:

ChatGPT:

“Rethinking Interaction with Artificial Intelligence

As we navigate the evolving landscape of artificial intelligence (AI), a critical oversight emerges: the scarcity of inquiries posed by both users and the AI itself. This observation prompts a deeper examination of our engagement with AI technologies, such as ChatGPT, which have significantly democratized access to conversational AI. This democratization has spurred innovative applications, yet the dialogue between humans and AI often remains unidirectional.

Traditionally, AI applications have been categorized into four primary uses: data organization, solving specific but straightforward questions, generating creative content (e.g., images, videos, texts, code), and automating tasks. These categories represent the surface of AI’s potential, accessible even to those with minimal technical expertise. Users initiate conversations with AI with a clear objective, relying on the AI to process and respond based on the provided directives. This interaction, while effective, limits the AI to a role akin to a digital assistant rather than a collaborative partner.

The conventional approach—directing the AI to perform tasks—parallels instructing someone without the requisite skills to repair a car. The instructions might be detailed, but without understanding or the capacity to retain complex information, errors are inevitable. Instead, breaking down tasks into manageable steps can lead to more effective outcomes, mirroring how we currently leverage AI for tasks like writing engaging content or automating repetitive processes.

However, this method constrains AI’s potential by not fully utilizing its capabilities. The analogy extends to the workplace; we hire talented individuals not merely for them to follow orders, but to leverage their expertise and insights. Similarly, engaging with AI should involve seeking its suggestions, fostering a two-way exchange of ideas.

A more fruitful approach involves posing questions to AI: rather than commanding “Draw success,” inquire “How can success be visualized?” This shift prompts a collaborative thought process, uncovering new perspectives and ideas. It transforms the interaction from a transactional command to a dialogue, enriching the creative process and expanding the potential applications of AI.

To address the initial critique, our engagement with AI must evolve. By emphasizing inquiry over instruction, we can unlock a richer, more dynamic partnership with AI technologies. This requires not only a technical shift but also a cultural one, where curiosity and collaboration guide our interactions with AI. By fostering an environment where asking “What am I missing?” becomes as natural as giving an order, we can enhance the symbiosis between human intelligence and artificial intelligence, leading to unprecedented levels of innovation and creativity.

In summary, the path forward lies not in directing AI but in dialoguing with it. By embracing a more questioning approach, we encourage a richer interaction that leverages AI’s full potential, leading to more innovative and effective solutions. This paradigm shift, from seeing AI as a tool to viewing it as a collaborator, can redefine our relationship with technology and expand the boundaries of what we can achieve together.”

However, as you can see, it mostly did not fix the problems it critiqued. There is no additional example, no added nuance on human versus AI decision-making. The AI use cases are not expanded, nor is the argument checked against other use cases. It did not stick to my writing style either. All in all, the text is just rewritten in a generic AI style.

The thing is that conversational AI like ChatGPT doesn’t question itself very well. When I asked it how satisfied it was with the correction, it was pretty happy, regardless of whether I asked how well it did or where it failed.

So we need to question it. But even better, we can use AI to question us. A great technique when working with AI is to let the AI give you the questions first. Then you feed your answers back to the AI, ask it to check the result for missing elements, and revise the result yourself. That makes it a dialogue instead of order after order, and it lets you leverage AI beyond being just an assistant that does things. A minimal sketch of this loop follows below.
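
To make this concrete, here is a minimal sketch of that question-first loop in Python, using the OpenAI client library. The model name, the prompts, and the `ask` helper are my own illustrative assumptions, not a fixed recipe; any capable chat model and prompt wording should do.

```python
# Sketch of the question-first loop: the AI supplies the questions,
# the human supplies the answers, the AI drafts, then critiques itself.
from openai import OpenAI

client = OpenAI()        # reads OPENAI_API_KEY from the environment
MODEL = "gpt-4o"         # assumption: any capable chat model works here

def ask(messages: list[dict]) -> str:
    """Send the running conversation, append and return the reply."""
    response = client.chat.completions.create(model=MODEL, messages=messages)
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    return reply

goal = "a LinkedIn post about innovation failure rates"
messages = [{"role": "user", "content":
    f"Before writing anything: what questions should I answer so you "
    f"can produce {goal} that fits my intent?"}]

# Step 1: the AI gives us the questions.
print(ask(messages))

# Step 2: the human answers them; the answers become the actual brief.
answers = input("Your answers to the questions above:\n")
messages.append({"role": "user", "content":
    f"Here are my answers:\n{answers}\n\nNow write the post."})
draft = ask(messages)

# Step 3: let the AI check its own result for missing elements,
# then revise the final text yourself instead of accepting it as-is.
messages.append({"role": "user", "content":
    "What is missing or weak in this draft? List concrete gaps."})
print(draft, "\n---\n", ask(messages))
```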

Getting beyond asking questions

To me, there is one more level to this: AI questioning us on its own. That is basic in human interaction. If you want to meet with a coworker, you say: “Schedule us a meeting.” The typical AI behavior would be to just schedule a meeting. A coworker would identify the missing piece of information: “When do you have time?” I currently don’t know a single AI model that proactively asks for the missing information. The sketch below shows what coaxing that behavior could look like.
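
As a rough illustration, a system prompt can approximate this behavior today, though that is user-induced rather than truly proactive. The prompt wording and the model name below are my own assumptions, not a built-in feature of any current model.

```python
# Sketch: coaxing clarifying questions via a system prompt. This is a
# workaround supplied by the user, not built-in proactive behavior.
from openai import OpenAI

client = OpenAI()

ASK_FIRST = (
    "Before carrying out any request, check whether key information is "
    "missing (time, participants, purpose, style, format). If so, reply "
    "only with your clarifying questions, and act once they are answered."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[
        {"role": "system", "content": ASK_FIRST},
        {"role": "user", "content": "Schedule a meeting with my coworker."},
    ],
)
# Hoped-for (not guaranteed) reply: "When do you both have time?"
print(response.choices[0].message.content)
```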

If you tell an AI to draw you a picture of a lake, it just gives you a picture of a lake. It doesn’t ask you what’s important to you in this picture. Do you have a certain style in mind? What do you want to use it for? What size or aspect ratio? Is there another picture of a lake that could act as a reference?

The AI is basically building a product without understanding the customer first. 

Sure, there is a big-data dimension to this: a dimension of optimizing for user acceptance, just like the like button shapes content on social media in general. Specialized AI takes such data into account. Google gives you different search results based on what it knows about you. But ChatGPT doesn’t ask me whether the picture I want to create is for my blog and should match the style of the others. When I tell it to retry an image, it doesn’t ask me what I didn’t like. I have to tell it what to do and course-correct on my own.

One day, though, it will. AI will learn to ask for missing information, and that will be another breakthrough in output quality. But it will probably require some form of dangerous self-awareness in AI.

PS: When I created the image for this post, the AI gave me two images and asked which one I preferred. However, after I selected one, it failed to ask me why. Maybe AI developers are too caught up in statistical thinking.

Author

Martin Betz

Co-Founder UTXO Solutions