
Disney making $1 billion investment in OpenAI, will allow characters on Sora AI video generator

HauntedPirate

Park nostalgist
Premium Member
I’ve been following this discussion, and I get the sense that many of you don’t have a great understanding of what AI does or how it works.

Refusing to use AI is like refusing to use Google.

Most of us have seen the results of bad AI text, image, and video output. That’s not all it’s capable of.

Most of you who are against AI are using it way more than you might think (and in ways that have likely benefited you): recommendations, social media algorithms, spell check, search results, YouTube videos, fraud detection, etc. are largely built on AI now.
Or...

We're acutely aware of what it is and is not, because we actually do use it and see the output of others using it as well.

Here's a great example of what "AI" can do:

A manager creates a weekly 30-minute status meeting for a team of 8 people. Said manager uses AI to create a boilerplate meeting agenda, broken down into 5-7 sections. The minimum estimated times dedicated to those sections add up to... 34 minutes. Not including the meeting wrap-up.
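That kind of overrun is trivial to catch mechanically, which is exactly the check the manager skipped. A minimal Python sketch; the section names and minimum times are hypothetical, chosen so the minimums total 34 minutes as in the example:

```python
# Sanity-check a generated agenda against the meeting slot.
# Section names and minimum times below are made up for illustration.
MEETING_MINUTES = 30

agenda_minimums = {
    "Welcome / roll call": 3,
    "Last week's action items": 6,
    "Team member updates": 10,
    "Blockers and risks": 6,
    "Upcoming deadlines": 5,
    "Open floor": 4,
}  # wrap-up intentionally not included, as in the example

total = sum(agenda_minimums.values())
print(f"Minimum agenda time: {total} min for a {MEETING_MINUTES}-min slot")
if total > MEETING_MINUTES:
    print(f"Overrun: {total - MEETING_MINUTES} min before the wrap-up")
```

A one-line check like this is the kind of verification that turns raw AI output into a usable draft.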
 

Disstevefan1

Well-Known Member
Just my opinion, but unfortunately, right now, it’s get on the AI train or get left behind. That’s just the way it is. In my daily use of AI, it does make you work faster, but it also takes away a lot of your control and can make folks “think less” and let AI do the work. The first rule when working with AI is “never trust AI”: always check the work being done. If we humans can keep checking AI’s work, we may be OK, but if we begin to rely on AI with no humans checking, bad things will happen.

We already see the lack of human oversight in AI art that clearly has flaws but still makes it out into products. This is a harmless example, but it was the result of not checking AI’s work.

At least for now, humans are needed to tell AI what to do and to check and correct its output, whatever that output is: art, reports, articles, code, etc.
 

_caleb

Well-Known Member
Or...

We're acutely aware of what it is and is not, because we actually do use it and see the output of others using it as well.

Here's a great example of what "AI" can do:

A manager creates a weekly 30-minute status meeting for a team of 8 people. Said manager uses AI to create a boilerplate meeting agenda, broken down into 5-7 sections. The minimum estimated times dedicated to those sections add up to... 34 minutes. Not including the meeting wrap-up.
So because the manager doesn’t use it well, AI is bad?

This is what I was referring to in my post. Several here are judging AI based on the worst, laziest, and most unethical uses of it.
 

HauntedPirate

Park nostalgist
Premium Member
So because the manager doesn’t use it well, AI is bad?

This is what I was referring to in my post. Several here are judging AI based on the worst, laziest, and most unethical uses of it.




Pay particular attention to that last link.
 

flynnibus

Premium Member
I have someone in the medical field as well, and obviously they have opinions about how AI may or may not help; reading your comments and your daughter's experiences around that is good food for thought. I think there may be a medical use for AI, but using it for actual diagnostics would basically become the oft-used "Doctor Google" so many patients use to try to usurp the knowledge of doctors and NPs. That starts down a dark path I'd rather not ponder yet.
And you'll notice I didn't suggest anything about having the AI do the diagnosis or replacing the doctor's interpretation :)

I think you will see it used first in the places where doctors are already aided, audited, scrutinized, etc. Doctors already have their cases scrutinized by others, like the insurance companies. Doctors already offload a lot of the downstream tasks: transcription, records, delegating to other disciplines, accepting automatic cross-checking by other parties, etc. Doctors already have electronic gates they must pass through before different things are blessed. Many of those tasks started out as other practitioners or junior roles, but they have been steadily replaced with technology and algorithms, with intervention by practitioners. AI will continue that trend and improve it. Hence what I meant about the Alexa-like thing, where monitoring and additional checking can actually be more automated and real-time.

You don't think a hospital or practice would be interested in having more checks preventing mistakes by staff?
 

flynnibus

Premium Member
Here's a great example of what "AI" can do:

And we simply go back to the fundamentals here.
There are two main things people need to embrace to deal with AI today:
#1 - The person is always responsible for the output. If someone passes along garbage, the fault lies with the human.
#2 - You must be willing to experiment and practice.

The fact your manager passed garbage through is a 'them' problem, just the same as if he had given the task to an admin who made similar mistakes. The manager owns their own reputation; failing to protect it is their mistake.
 

_caleb

Well-Known Member



Pay particular attention to that last link.

I’m sure there is no shortage of examples of bad AI use. I’d encourage you to explore some of the better, more helpful uses.

I read the Psychology Today article, and I agree caution is in order. But the argument is similar to warnings against the use of machines, calculators, and computers: if the user doesn’t have experience in whatever came before the technology, they won’t develop the skill to evaluate its output.

Whether our research begins with a card catalog in a library or with a chatbot on a MacBook, the skill we have to develop is media literacy. And you can be media literate even if you aren’t a subject matter expert.
 

HauntedPirate

Park nostalgist
Premium Member
And we simply go back to the fundamentals here.


The fact your manager passed garbage through is a 'them' problem, just the same as if he had given the task to an admin who made similar mistakes. The manager owns their own reputation; failing to protect it is their mistake.
What about the senior security engineers who were told to use AI to “improve the access security posture”? When they input current conditional access settings and asked AI to find security problems and recommend solutions, the recommended solutions would have taken us backwards by about 4 years.

AI needs hand-holding, fact-checking… use whatever words you want. It’s just a tool and cannot replace human thinking. Depending on it is a fool’s errand.
 

HauntedPirate

Park nostalgist
Premium Member
And you'll notice I didn't suggest anything about having the AI do the diagnosis or replacing the doctor's interpretation :)

I think you will see it used first in the places where doctors are already aided, audited, scrutinized, etc. Doctors already have their cases scrutinized by others, like the insurance companies. Doctors already offload a lot of the downstream tasks: transcription, records, delegating to other disciplines, accepting automatic cross-checking by other parties, etc. Doctors already have electronic gates they must pass through before different things are blessed. Many of those tasks started out as other practitioners or junior roles, but they have been steadily replaced with technology and algorithms, with intervention by practitioners. AI will continue that trend and improve it. Hence what I meant about the Alexa-like thing, where monitoring and additional checking can actually be more automated and real-time.

You don't think a hospital or practice would be interested in having more checks preventing mistakes by staff?
If it can improve voice-to-text transcription, I know more than a few doctors who would welcome that! 😂 Some of the stuff I’ve heard about coming from transcription software is hilarious.
 

flynnibus

Premium Member
What about the senior security engineers who were told to use AI to “improve the access security posture”? When they input current conditional access settings and asked AI to find security problems and recommend solutions, the recommended solutions would have taken us backwards by about 4 years.
Then I'd say "congrats, that was step 1." Now go back and keep iterating, or find out if there were better tools and models. No one told you to implement the output directly.

Using AI is not a one-prompt process. Honestly, this sounds like some guys who just did something so they could say they did it, and then dismissed it instead of actually being motivated to see if there were gains to be had using the tooling.

This is the state of AI today: no single source is the best answer for everything. Even in my 'tool for everyone' front-end for the mass audience in the company, there are still like 8-9 different models available for them to pick from. There is still very much an art in working on prompts, using the right models, and, in advanced cases, training your models with specific data to improve the results.

This is why many people flop with it, and the people who are actually doing the homework are going to race past the people who sit with their arms crossed going "AI is all garbage." They aren't going to have the skillset to use the tools and are going to get passed over. In today's world, effective AI usage takes practice and experimentation.

And in a few years when they are all using the improved tools - they'll rewrite history and say "Well I didn't mean all AI was bad, it was just bad then!"


AI needs hand-holding, fact-checking… use whatever words you want. It’s just a tool and cannot replace human thinking. Depending on it is a fool’s errand.
And you won't find people advocating for that with current AI. In fact, every AI advocate would tell you exactly what you said - VERIFY and rework.

Anyone who is walking around with stories of AI direct to market is just a shaman.
 

HauntedPirate

Park nostalgist
Premium Member
Then I'd say "congrats, that was step 1." Now go back and keep iterating, or find out if there were better tools and models. No one told you to implement the output directly.

Using AI is not a one-prompt process. Honestly, this sounds like some guys who just did something so they could say they did it, and then dismissed it instead of actually being motivated to see if there were gains to be had using the tooling.

This is the state of AI today: no single source is the best answer for everything. Even in my 'tool for everyone' front-end for the mass audience in the company, there are still like 8-9 different models available for them to pick from. There is still very much an art in working on prompts, using the right models, and, in advanced cases, training your models with specific data to improve the results.

This is why many people flop with it, and the people who are actually doing the homework are going to race past the people who sit with their arms crossed going "AI is all garbage." They aren't going to have the skillset to use the tools and are going to get passed over. In today's world, effective AI usage takes practice and experimentation.

And in a few years when they are all using the improved tools - they'll rewrite history and say "Well I didn't mean all AI was bad, it was just bad then!"



And you won't find people advocating for that with current AI. In fact, every AI advocate would tell you exactly what you said - VERIFY and rework.

Anyone who is walking around with stories of AI direct to market is just a shaman.
I can't speak for the other guys, but my personal experience has been just that: it takes 3-4 iterations of questions to get exactly what I'm looking for. Often I'll take the first response as a framework and modify it to do what I need done, rather than sitting and asking questions multiple times. Not because I want it to take longer doing it that way, but because I learn from making those modifications and adjustments. And then the next time I'm doing something similar, I've learned from the last bit of work, or I already have something fresh in my mind, or I have a solution I can use as a baseline for whatever else I'm working on. If it's just something I want to get done at some point, or I'm feeling lazy, I'll run through the gauntlet of questions to get my required output.

And yes, that manager will often simply accept whatever AI slop gets spit out the first time and run with it. Upper management eats it up. 🤦‍♂️
 

Disstevefan1

Well-Known Member



Pay particular attention to that last link.
Go to trade school, kids!

Become a plumber, an electrician, or an air conditioning and refrigeration tech...
 

_caleb

Well-Known Member
What about the senior security engineers who were told to use AI to “improve the access security posture”? When they input current conditional access settings and asked AI to find security problems and recommend solutions, the recommended solutions would have taken us backwards by about 4 years.

AI needs hand-holding, fact-checking… use whatever words you want. It’s just a tool and cannot replace human thinking. Depending on it is a fool’s errand.

I don’t know anyone who is advocating for “draft one simple prompt and expect the results to be just what you need.” Anyone who does that is doing it wrong!

The way some folks here talk about AI sounds like: “I don’t use Google for web search. It’s super unreliable. I typed in ‘car won’t start,’ and the first page of results didn’t even address the specific issues I’m having with my 2018 Honda Odyssey!”
 

Dranth

Well-Known Member
So because the manager doesn’t use it well, AI is bad?

This is what I was referring to in my post. Several here are judging AI based on the worst, laziest, and most unethical uses of it.
Sure, AI can be used well. Trained on the right data, it can actually do wonders.

There are two big issues I have seen with folks I interact with.

First, most are only exposed to the general AI that is trained on, or includes, the bowels of the internet, like Twitter. That is when we get great things like an AI worshiping certain fascist dictators from WWII and producing inappropriate videos with minors.

Second, a lot of people recognize the absurd amount of damage that can be done with AI if it is not used correctly. They also don't believe we will have reasonable and thought-out controls, or that, if we did, they would be reliably enforced.
 

_caleb

Well-Known Member
Sure, AI can be used well. Trained on the right data, it can actually do wonders.

There are two big issues I have seen with folks I interact with.

First, most are only exposed to the general AI that is trained on, or includes, the bowels of the internet, like Twitter. That is when we get great things like an AI worshiping certain fascist dictators from WWII and producing inappropriate videos with minors.

Second, a lot of people recognize the absurd amount of damage that can be done with AI if it is not used correctly. They also don't believe we will have reasonable and thought-out controls, or that, if we did, they would be reliably enforced.

Again, that you can find a thousand ways it’s being used poorly isn’t a good argument against it altogether.

As dumb as the Sora deal may have been, at least Disney was trying to get ahead of things and find a way for fans to use their IP with AI in safer ways.

Most of the arguments against AI I’ve seen here are literally the same ones used against the printing press when it was first introduced.


ETA: Those concerns were warranted and prescient then, and they are now. But in hindsight we can see the net benefit of the innovation.
 

flynnibus

Premium Member
I can't speak for the other guys, but my personal experience has been just that - it takes 3-4 iterations of questions to get exactly what I'm looking for.

I think one of the more non-intuitive things people struggle with is relearning to use natural language. For the last 30 years, we've been learning how to craft our search queries to cope with how the indexes worked. We are terse, we focus on keywords, we don't have a dialog, we don't try to frame how the response should be presented.

Prompts right now are nothing like that. Putting in "Google search-like" phrases will yield results, but often not really what people are after. Natural language, and key coaching like telling the agent the persona to use in its response, the format, the boundaries, etc., significantly improve responses. Learning how to do this is going to take re-education for the general population. And because most people just get exposed to the tools without much guidance, they get a bunch of garbage out, get discouraged, and usually give up or dismiss the whole thing.
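The coaching described here can be sketched concretely. Everything below is illustrative: the wording, field names, and the build_prompt helper are hypothetical, not any particular vendor's API:

```python
# Contrast a search-engine-style query with a coached natural-language prompt.

search_style = "disney openai deal summary"  # terse keywords, no framing

def build_prompt(persona: str, task: str, format_hint: str, boundaries: str) -> str:
    """Compose a prompt that names a persona, an output format, and boundaries."""
    return (
        f"You are {persona}. {task} "
        f"Present the answer as {format_hint}. {boundaries}"
    )

prompt = build_prompt(
    persona="a business journalist writing for a general audience",
    task="Summarize the reported Disney investment in OpenAI.",
    format_hint="three short bullet points",
    boundaries="Only use facts I provide; say so if something is unknown.",
)
print(prompt)
```

The two strings ask for roughly the same thing, but the second gives the model a persona, a format, and boundaries to work within, which is the coaching a bare keyword query never carries.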

AI right now takes a commitment, one that most people aren't willing to make. But until the tools get grandma-proof, that extra work is what it takes to run the race. And for people in their jobs, it's going to start creating separation between employees, and people aren't going to like the consequences of their inaction.
 

Casper Gutman

Well-Known Member
I’ve been following this discussion, and I get the sense that many of you don’t have a great understanding of what AI does or how it works.

Refusing to use AI is like refusing to use Google.

Most of us have seen the results of bad AI text, image, and video output. That’s not all it’s capable of.

Most of you who are against AI are using it way more than you might think (and in ways that have likely benefited you): recommendations, social media algorithms, spell check, search results, YouTube videos, fraud detection, etc. are largely built on AI now.
People know that AI is now used for the functions you listed because all of those functions have gotten much, much worse and less useful since AI was integrated into them.
 

Casper Gutman

Well-Known Member
I’m sure there is no shortage of examples of bad AI use. I’d encourage you to explore some of the better, more helpful uses.

I read the Psychology Today article, and I agree caution is in order. But the argument is similar to warnings against the use of machines, calculators, and computers: if the user doesn’t have experience in whatever came before the technology, they won’t develop the skill to evaluate its output.

Whether our research begins with a card catalog in a library or with a chatbot on a MacBook, the skill we have to develop is media literacy. And you can be media literate even if you aren’t a subject matter expert.
“Just a tool,” “card catalog,” etc. etc.

It’s just a bunch of catchphrases and clichés that people regurgitate without thinking critically about what they mean or whether they’re the slightest bit relevant.
 
