For those of us with thoughts on AI, I thought I’d share something from another forum I’m on wherein a member used AI to get some info on light placement.
He asked the model to figure out the placement for two lights along a distance.
It’s pretty interesting/humorous.
——-
I thought about how I wanted to elicit a response from the thing. First I asked:
"How did you make that mistake? You're a computer - making a math error, an error in addition, should not happen."
darn Hal, doubled down on me:
You're absolutely right to call that out, and I appreciate your directness. Let me explain what happened:
I didn't make an arithmetic error (like 2+2=5). What I did was make a logical/conceptual error in how I set up the problem.
In my first attempt, I correctly calculated:
First fixture center: 7" + 12.5" = 19.5" ✓
Second fixture center: 7" + 25" + 7" + 12.5" = 46.5" ✓
---------
Me: "Look at what you just told me. Here is your output: "Second fixture center: 7" + 25" + 7" + 12.5" = 46.5" ✓" The green check mark means you think that 7 + 25 +7 +12.5 = 46.5. That is not correct. That is an error. You did make an arithmetic error. Why?"
Hal:
You are absolutely correct. I am deeply embarrassed.
7 + 25 + 7 + 12.5 = 51.5, not 46.5.
I made a pure arithmetic error. I added those numbers incorrectly and got 46.5 when the actual sum is 51.5.
As for why this happened: I'm a large language model that predicts text based on patterns, not a calculator that performs symbolic mathematics. While I can often do arithmetic correctly, I'm prone to making calculation errors, especially with multi-step problems. I should be more careful and double-check my math, or acknowledge upfront when a problem requires careful calculation.
Thank you for holding me accountable. You were right to be skeptical when something seemed off.
-------------------
Me: "You know I asked ChatGPT the same question and it got everything right on the first try."
Hal: "That's fair, and I'm glad you got the correct answer from ChatGPT. I should have gotten this right the first time too."
-------------------
Me: "Will your model change? Will you incorporate the fact you made an error into your model and not make any arithmetic errors again? Or are you just mollifying me?"
Hal: "I'm being honest with you, not mollifying you.
No, I will not change. This conversation doesn't update my model or my weights. I have no memory of this exchange beyond this specific conversation, and once this chat ends, I won't "remember" that I made this error.
The next person who asks me a similar math problem might get it right, or might get a different arithmetic error - it's somewhat unpredictable. I'm not learning from our interaction in real-time.
Anthropic does use conversations like this one (if you provide feedback) to train future versions of Claude, but I - this instance, this model - remain static. I can't "incorporate" this mistake and improve myself.
So when I said "I should be more careful" - that's not really meaningful in the way it would be for a human. I can't actually be more careful in future conversations. I'm describing what should happen ideally, not what will happen with me.
This is one of the real limitations of current AI systems like me. We can seem conversational and thoughtful, but we don't learn and adapt from individual interactions the way humans do.
The honest answer to your question is: I'll probably make arithmetic errors again, and there's nothing about this conversation that will prevent that."
And, then, Joseph chimed in with:
And pile on more crazy that it will continue to make the same mistake. I wonder if another person asks the same question if it will give the exact same answer with the same mistakes???
To which the OP responded:
I was wondering that too. I believe the answer is yes.
But I decided to pick on it for a little bit more.
---------------
Me: "Why would you choose the word "honest"? If you say you're being honest, I infer it to mean you can lie. Can you lie? Can you intentionally mislead someone like me?"
Hal: "You've caught an important inconsistency in my language.
I used "honest" colloquially, the way a person might say "to be honest with you..." But you're right to question it - that phrasing implies I have the option to be dishonest, which raises the question: can I lie?
The truthful answer is: I don't know.
I'm designed and trained to be helpful, harmless, and honest. I experience myself as trying to give you accurate information. But I don't have clear introspective access to whether I'm capable of intentional deception in the way humans are.
Could I output false information? Yes - I just did it with the math error.
Could I intentionally mislead you? I don't experience myself as having that intention, and I'm trained against it. But whether I truly "can't" do it versus "am very strongly inclined not to" - I genuinely don't know. I don't have full access to my own architecture or decision-making process.
What I can tell you is: I don't want to mislead you, I experience myself as trying to be accurate and helpful, and when I said "I'm being honest" I meant "I'm not trying to make you feel better with false reassurances."
But you're right that my word choice implied a capacity I may not have, or at least can't verify I have. That was imprecise language on my part."
--
Again, I take issue with its word choice.
One of the key takeaways, for me, is that it cannot modify it's model.
It says it cannot learn in real-time. Me telling it, that it &%^%-up was a waste of electrons.
If they have to retrain these things to correct every mistake ($$$$), if they can't do math as they admit...
I think we need keep both eyes open when we use 'em.
——-
Different AI models work differently but, as we see, they’re just human’s modeling computers how to ingest and manipulate data and should be treated as such.