Using Chat GPT


(Bob M) #1

I’ve started using Chat GPT. We plan to go to Kingston, RI (USA) this summer. Here is a query:

Could you tell me of restaurants in the Kingston, RI area that have menus suitable for someone on a ketogenic or low carbohydrate diet?

Here is the answer:

While there are some negatives (egg white omelets?), the list is not bad and much better than I could have done, considering it took all of 30 seconds. Or, put differently, I could spend a long time on search engines, Yelp, etc., and probably not get this list.

It makes me wonder what other questions I could ask it: “Could you tell me the best arguments against the idea that LDL causes heart disease?”; “Could you tell me the best arguments for the idea that LDL causes heart disease?”; “Could you tell me what ketone levels are recommended for people on a ketogenic diet?”; “What about for people who have a condition, such as depression?”; “What are the best papers I can read on trials of the ketogenic diet for depression?”

We had friends over, and one of them was using Chat GPT to explore different options in her work. It’s really amazing.

It’s not always great. I asked Chat GPT for a program for Word to perform a certain function. The code it generated had a nice structure, but did not really work. I was able to ask on a board where helpful people answer and get a better version that worked.

However, the person who was using Chat GPT kept asking follow-up questions to get it to tailor its output to certain situations. I did not know you could do that when I asked about coding. Perhaps if I had said something like, “That code has a nice structure, but does not work for my application because of X. Could you improve the code to overcome this limitation?”, it would have worked.


(You've tried everything else; why not try bacon?) #2

Here’s an interesting analysis of what is going on:


(Bob M) #3

Paul, have you tried it yet? (Since that’s a video, I’ll see if I can look at it this weekend.)

My wife’s brother’s wife used it for her Methodist preacher friend. She asked Chat GPT something along the lines of “Create a sermon for how music in the Methodist faith can be used to help overcome the damage caused by gun violence”. Its answer was amazing.

She also works at a college where they are looking at different (verbal) logos. She passed each one through Chat GPT, asking it for the benefits of each logo, then the detriments. It provided great lists of both.

In my own field, I asked it to provide technical background for a technology area with which I was not familiar. I got a GREAT 3-paragraph introduction to this specific question. The type of thing that would take a good 10+ minutes, or maybe 30+, to get by searching, since with searching you still have to find the relevant information yourself.

I can’t remember a single technology that made my jaw drop. But this did. I was stunned at how quickly it came up with an answer to a question that was so esoteric.

And the fact that you can go on vacation to a new location and ask it questions and get answers is so helpful. We asked it for an itinerary for a family with teens, dogs, and gluten-free options in a part of RI with which we are not familiar, and it gave us great ideas. Then we asked it for rainy-weather ideas and ideas for live music or performances, and to build those into the itinerary, and it did.

Stunning, really.


(Joey) #4

Thanks for sharing.

This is truly dazzling technology. Like all tools, it remains morally neutral such that its effects on society for better or worse depend entirely on what humans wind up doing with it. From the historical perspective, this feels rather unnerving to say the least.

Still, it’s getting me interested in setting up an account. :thinking:


#5

Getting a low carb restaurant list seems like a really nice use of it! A guy at work mentioned that he asked it to plan a meal after giving it a list of ingredients in his fridge, and he said it turned out great. Another coworker uses it to write bedtime stories given a few prompts from his daughter. I have played around with it to see how well it could write some simple code. The code it generates can sometimes be a little buggy, but it is amazing what it can do. It is definitely going to change the way we work.


#6

It is not neutral at all. Its morality is based on the data it uses to learn. The data is human-generated and therefore inherently carries morality. The data is also picked by humans, which adds another layer of bias. On top of that, Microsoft/OpenAI heavily censor ChatGPT. If you think it is amoral, try asking it how to 3D-print a gun or something like that.


(Joey) #7

We’re talking about different “it”s.

I’m saying that AI technology is morally neutral, just like any/every other tool. Cars, computers, chainsaws, coffee makers, spreadsheets… All morally neutral.

You’re saying that ChatGPT is biased. I’ve got no doubt that ChatGPT is biased.

But that doesn’t mean that AI technology is inherently moral or immoral. It’s neutral in its essence. Again, I believe we’re not talking about the same “it.” No?

Data is also morally neutral regardless of its source. Whether bias=immoral is a whole other discussion. :vulcan_salute:


(Michael) #8

Here is a sample of health questions


#9

You could have easily accomplished the same using Google.


#10

I don’t think that you can compare AI to a tool like a car or a computer. Artificial intelligence, at least the kind that is based on language models, is built on data that is created by moral beings. There is no way that it can be morally neutral. You cannot even compile the data so that it is morally neutral, because a human’s morality always plays a role in picking the data. It is not just a tool; it always reflects the morality within the training data.

So separating the technology - which is ultimately the code - from the data that the code is running on does not make sense, since the code is useless without the data. To return to your comparison with other tools: there is no point in a car without its wheels. A discussion about the amount of a car’s exhaust fumes or the dangers to pedestrians is pointless when it can’t drive.


(Joey) #11

@dr_wtf Looks like we’re still farther apart than I imagined.

I agree that, qualitatively, AI is a more advanced technology than prior inventions. And like all new things, that exposes us to significant uncertainty - which makes it scary.

Frankly, I expect it will likely be used for (what I personally would consider) immoral purposes. So, I guess that makes me biased. :wink:

But with all due respect, I find none of the reasons you’ve cited for concluding that this latest invention is somehow trapped by human morality to be particularly compelling. (Especially as compared to prior inventions, which you seem to believe were not?)

Printing presses, cars, radio, television, splitting atoms, computers, robotics … each was vilified upon its arrival as a corrosive moral force that had to be suppressed in order to save society. And yes, each eventually transformed society. But those societal changes had to do with how humans used the technology - not the tools themselves.

A sharp knife is essential for preparing food. Also for stabbing people.

And so I firmly believe that computer-based advanced logic and learning circuits (i.e., “AI”) are inherently neutral as a technology construct.

What people choose to do with AI - including how we populate inputs and act upon outputs - will determine the moral effects.

While it appears we’re still talking about two different conceptual constructs, let’s try to agree on this common ground: We are both anxious about how this new technology will be deployed and remain fearful of its potential for immoral applications.

To my mind, that’s a reflection of humanity. Technology is not immoral. :computer:


(Kirk Wolak) #12

I’ve spent a few days (8-hours-a-day type days) testing ChatGPT. When you realize it is a language parsing/rating prediction system, it’s almost truly amazing (but consider that IQ is actually a measure of pattern recognition… For proof, take a look at most Mensa tests… complex patterns, predict the next pattern).

Anyway, it is truly amazing, and it will replace Google Search as the premier search engine (mostly because Google has been so seriously broken by hiding many truths and promoting pure woke nonsense).

Unfortunately, while it’s a great place to learn, it can be guilty of Bro Science. I like to test it at the EDGE of my competency to expand my horizons. On Global Warming, for example, it has been pre-programmed to look past facts. But if you ask it enough questions, you can get it to finally admit things like “Mars has MUCH higher CO2 levels than the Earth and is much colder,” or “There are MANY factors in warming and CO2 is only one of them; at current levels it’s hard to say how much of a role it plays, or how much could be tied to human activities.”

If you work with it long enough, and force it to find your arguments, it eventually gets there.

As for wrong answers: plenty of them. I’ve asked it to write complex code (the algorithm for building and searching a B-tree). While it gets the “structure” correct, the code is horrendous. After 3 hours of failures, I gave up trying to get it to work. [Again, this is at the “edge” of my knowledge; it’s been 35 years since I looked at this at university.]

Later I learned that it’s best if you can reference specific sources (like Sedgewick for algorithms, etc.), because if it has read the material, it will leverage those references.
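For anyone curious what that “structure” looks like, here is a minimal sketch of just the search half of a B-tree in Python. The Node layout is only an assumption for illustration; a real B-tree (the kind Sedgewick covers) also enforces minimum/maximum key counts per node and a balanced insert/split routine, which is the part that is hard to get right.

```python
# Minimal B-tree search sketch. Node layout is assumed for illustration only:
# each node holds a sorted list of keys, and an internal node holds one more
# child than it has keys (children[i] covers the keys between keys[i-1] and keys[i]).
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Node:
    keys: List[int] = field(default_factory=list)          # sorted keys in this node
    children: List["Node"] = field(default_factory=list)   # empty for a leaf node


def search(node: Optional[Node], key: int) -> bool:
    """Return True if key is stored somewhere in the subtree rooted at node."""
    if node is None:
        return False
    i = 0
    # Walk past every key smaller than the one we want.
    while i < len(node.keys) and key > node.keys[i]:
        i += 1
    if i < len(node.keys) and node.keys[i] == key:
        return True                         # found it in this node
    if not node.children:
        return False                        # leaf node: nowhere left to look
    return search(node.children[i], key)    # descend into the child between keys


# Example: a tiny two-level tree holding 5, 10, 15, 20, 25
root = Node(keys=[10, 20],
            children=[Node(keys=[5]), Node(keys=[15]), Node(keys=[25])])
print(search(root, 15))  # True
print(search(root, 17))  # False
```

The building side (inserts with node splits to keep the tree balanced) is several times longer than this, which gives a language model a lot more room to go wrong.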

A cool trick: a good friend uses ChatGPT to write PROMPTS for other AIs, like the ones that can create images. “Describe a drawing that my 3yo girl would like.”
He pasted the response into “Midjourney” (a drawing AI), and got this:

Simply stunning on both sides.
Unfortunately, the ONE thing ChatGPT and language processing systems in general cannot do… is validate their answers.
Finally, it truly bothers me that they are already doused with politics.
Free, Open, Honest Speech must be encouraged. Science should not care about feelings.
The only way to know the truth is to let the light shine on all of the arguments.


(KM) #13

Probably my biggest misgiving about AI. We all know that Google’s an idiot; it’s obvious. What happens when we stop knowing that AI might be wrong?


#14

Let AI chat to itself if it’s that smart…


(Joey) #15

A fundamental point of confusion about AI is that, no, it is not armed with facts. It is armed with material scraped from online sources. Uh-oh.

The internet is notorious for being filled with bad science, politics (both woke and neo-fascist), historical error, and the full complement of human frailty and stupidity. “Alternate facts” abound.

This toxic mix then becomes the starting point for “artificial intelligence” and “machine learning.”

Why should anyone be surprised that it appears to be biased, withholds truths that we hold dear, and seems disconnected from anything approaching what we believe to be perfect intelligence?

It’s technology … while morally neutral in and of itself, its inputs and outputs simply reflect what we (as moral actors) choose to feed it and choose to act upon when it spits out a result.

Put differently: Since the internet is where facts and opinion collide, why are we surprised by AI’s shortcomings?


(Bob M) #16

I’ve been using Chat GPT to do some Word VBS programming. Does it make mistakes? Sure. But I asked it why it used a certain structure at one point and a different structure at a different point, and the answer it gave me was great and took seconds. Could I have found that answer somewhere else? After a ton of searching, yes. But I found it in seconds.

Now, I use boards where I post a question, and helpful people answer. That does work and is great. But I can use Chat GPT 24/7/365, and can ask it questions at 6am on Saturday (when I normally do this programming).

And as long as I keep my questions short and narrowly tailored to answer small questions, I get good answers.


(Joey) #17

This sounds like an ideal use case. Since there are few if any heated politically-infused opinions about VBS programming, Chat GPT is highly effective in helping you - and unlikely to ask to meet you in a dark alley. :+1:

(Global warming, gun control, US presidents, … much less so.)


#18

OK Bob, I actually get you and agree with you in a way. I really do.

But I still prefer humans.
And dogs.


#19

And I know that you do too.

Carly, as a wee pupper!


(Joey) #20

Artificial Pet:
