A ChatGPT tale from the trenches

Started by ergophobe, September 12, 2025, 07:49:02 PM

Previous topic - Next topic

ergophobe

For the last 10 days I have been helping my nephew through surgery and recovery. He had jaw surgery, so he is unable to speak.

He installed text-to-speech software that worked with Google Docs, but it was annoying: you had to type, then select the text to read, then click a little icon. Half the time he would be trying to tell me something and instead of reading what he had written, it would say, "Untitled Document Google Docs" (in other words, it would read the window title, no doubt because it had focus).

As a joke, sometimes when he would ask me a question, I would just respond "Untitled Document Google Docs"

Anyway, he's a software engineer by profession, but not in a condition to code something. But he was in condition to go to ChatGPT 5 and type

"Make me either a native Windows app or a Chrome package that has an area to type text and will do text to speech conversion when I type CTRL-Enter."

And literally in the time I was... I don't recall but let's say going to the bathroom and brushing my teeth, he had a fully functional text to speech app that immediately read any text in the window as soon as he typed CTRL-Enter.
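We never saw his generated code, but the core of such an app is genuinely small. Here's a minimal browser sketch of the same idea (my own reconstruction, not his actual app) using the standard Web Speech API, with the Ctrl-Enter check pulled out into a plain helper function (`isSpeakShortcut` is my name, not anything from his app):

```javascript
// Minimal sketch (not the actual generated app): a textarea that speaks
// its contents when you press Ctrl-Enter, via the Web Speech API.

// Pure helper so the shortcut logic can be exercised outside the browser.
function isSpeakShortcut(evt) {
  return Boolean(evt.ctrlKey) && evt.key === "Enter";
}

// Browser-only wiring; guarded so the file also loads under Node.
if (typeof document !== "undefined" && typeof speechSynthesis !== "undefined") {
  const box = document.createElement("textarea");
  box.rows = 6;
  box.cols = 60;
  document.body.appendChild(box);

  box.addEventListener("keydown", (evt) => {
    if (isSpeakShortcut(evt)) {
      evt.preventDefault();       // keep the newline out of the textarea
      speechSynthesis.cancel();   // interrupt any utterance still playing
      speechSynthesis.speak(new SpeechSynthesisUtterance(box.value));
    }
  });
}
```

Dropped into an HTML page's script tag, typing into the box and hitting Ctrl-Enter reads the text aloud in Chrome. No server, no build step, which is presumably why the model could produce a working version so fast.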

I thought to myself that five years ago, best case, this would be a $500-$1,000 job on Upwork that would take hours of back and forth and days for delivery; worst case it would be a $5,000-$10,000 job requiring a business-case document, budgetary approval, and a couple of months to deliver using the contract developers my old employer used.

And then the AHA moment - AI doesn't necessarily have to compete on price, it can also compete on convenience.

Yes, of course, in this case, he would not have paid thousands of dollars for that. But I realized that even if you pit a fully-human dev process that costs $500 and takes two weeks against a "human in the loop" process that costs $1000 and takes two hours, the AI-driven process still wins in many, many corporate use cases.

See: The Tyranny of Convenience
https://th3core.com/chat/index.php?topic=8468

BTW, I think I mentioned this, but he also said AI is radically changing the hiring process. First, companies, including his, realized that the dreaded whiteboard interview was absurd because no developer worked that way. They would do Google searches and go to Stack Overflow for existing solutions, search the documentation, and so forth, instead of starting from scratch with knowledge they held in their heads.

So they switched to a project-based interview. Essentially, if you got far enough in the process, they would hire you as a contractor for a few days and have you do an actual task that one of their experienced engineers could do in a day. But then about 18 months ago, one of their engineers showed that using ChatGPT and no programming knowledge, he could do the task in a few hours at a level that would somewhat exceed their normal junior engineer hiring threshold.

A few months ago, they re-ran the experiment and found that they could solve the problem from start to deployment in 7 minutes using a single prompt and no refinement prompts, and that the output met what they considered the senior-engineer standard.

rcjordan


ergophobe

This generated a pretty much flawless raw package for use in Chrome.

-------

Another tale from the trenches, this one in the New York Times "The Ethicist" column (bolding is mine):

Quote
I volunteer with our local historical society, which awards a $1,000 scholarship each year to two high school students who submit essays about a meaningful experience with a historical site. This year, our committee noticed a huge improvement in the quality of the students' essays, and only after announcing the winners did we realize that one of them, along with other students, had almost certainly used artificial intelligence. What to do?

https://www.nytimes.com/2025/09/10/magazine/essay-contest-ai-ethics.html

BTW, the ethicist's take, which I agree with: "What happened this year should be taken as a wake-up call, rather than a crime scene." Given the integration of MS Co-Pilot into Word, broad use of Grammarly, and so forth, you simply have to expect that every essay submitted will have been aided by some level of AI assistance.

The cat doesn't even remember the bag if she's under 22

grnidone

I saw a thing somewhere where an English professor, instead of disallowing ChatGPT, encouraged his students to use it and taught them HOW to use it well.

He would then have the kids look at the essays that were written, determine IF the bot made its point, and work to make the essay better via editing rather than starting with a blank page.

I thought it was kind of genius: they will use ChatGPT already, so embrace it.

ergophobe

Perhaps you're thinking of this?
https://www.nytimes.com/2025/08/06/opinion/humanities-college-ai.html

Not super uncommon. A friend of ours who teaches poetry and writing lets her students do as they wish provided they answer three questions:

 - where did I use AI?
 - what skills did I outsource to the AI?
 - are those skills that I am trying to develop in myself?

She recognizes that all of them will and should use AI, but she wants them to outsource the skills they don't care about. Spelling is an obvious one. It is a modern obsession that the greatest minds in history prior to the 17th century didn't worry about at all: Shakespeare and Rabelais would both be judged poor spellers today, and the compositors who set the type for Shakespeare's famous First Folio commonly changed spellings just to make things fit correctly on the page (common practice in the 16th and early 17th centuries).

But imagination might be a skill they do not want to outsource if they hope to be able to express the stories in their minds rather than, say, just write term papers for a grade.

rcjordan

2 links:

A study found that greater lifetime GPS experience, based on GPS habits and reliance on GPS in various navigation situations, correlated with worse spatial performance. In addition, greater GPS habits were tied to lower cognitive mapping abilities and less dependence on spatial strategies associated with the hippocampus.

Rethinking GPS navigation: creating cognitive maps through auditory clues - PMC
https://pmc.ncbi.nlm.nih.gov/articles/PMC8032695/

====

Does ChatGPT harm critical thinking abilities? A new study from researchers at MIT's Media Lab has returned some concerning results.

ChatGPT's Impact On Our Brains According to an MIT Study | TIME
https://time.com/7295195/ai-chatgpt-google-learning-school/

buckworks

>> reliance on GPS in various navigation situations, correlated with worse spatial performance

My theory would be that it correlates because people with less of a spatial sense would be more likely to use GPS.

ergophobe

>> people with less of a spatial sense

That may be part of it, but my N=1 experience is that the more I use a GPS, the worse my navigation skills get. Of course, I have to filter that through the fact that spatial performance tends to decline with age, so there is a major confounding variable.

My land navigation skills were... much appreciated, I'll say. I remember before I had a GPS, we were in a group of 4 wandering off-trail trying to get back to the car. It was important to get it relatively close because we were navigating to the apex of a turn in the road, and if you missed that, you might go several miles without hitting another road. Two friends were in the woods trying to get a GPS signal for what seemed like 10 minutes. I was growing impatient and tired after a long day. I looked at Theresa and said, "You can do what you want, but the car is RIGHT THERE," pointed, and started walking. She looked at me, looked at the guys with the GPS, and decided to follow me. 20 minutes later we emerged from the woods 100 feet from the car.

I could regularly do that. The flip side is, once, when my brother got hurt, I got flustered and got turned 180 degrees when it really mattered. I got into "hurry" mode and became what I'll call "land blind." Because my sense of direction was usually so good, everyone in the group fell in and followed me even though everyone else thought I was wrong, costing us about 10 minutes of valuable time before others spoke up. So they were not all success stories (and that one was a powerful lesson for us all).

Now, I carry a GPS on my phone and find I quite often get turned around in dense forest and have to consult it. The more I navigate with the GPS, the worse I get. I've taken to avoiding it sometimes just to not completely lose the ability and I can usually navigate okay, but just not like I could before I had ever used a GPS.

I've seen this with some friends who have become really dependent on GPS and have watched their intuitive land navigation skills atrophy.

On the other hand, a friend and I were out for a 26-mile run this spring and we ended up on a trail that dead-ended; we realized we had lost the main trail. In the old days, we would have backtracked a half mile, adding a full mile to an already long day. I pulled out my phone with the GaiaGPS app and saw we were 200 feet from the official trail. A quick XC detour and we were back on track. We also had a couple of times we lost the trail in the snow and again, rather than gross navigation like we would have done with map and compass, we could recenter on the trail within just a few feet, confidently knowing it was close by under the snow. The same happened on another long run through a burn area a couple weeks ago. There was about a mile where the whitethorn regrowth had completely overtaken the trail, and the GPS saved us a lot of hassle.

So on balance, I love having the GaiaGPS app, but I do think my directional intuition has gotten a lot worse. And I think a lot of that is because in situations like the ones mentioned above, I would have solved it through trial and error in the past, and that would have been really useful practice. Now I don't get that constant practice.