Ai future? | Page 3 | GTAMotorcycle.com


I don't think I am. Society in general has been moving in the right direction for a long time. Racism, mostly gone.
Slavery, mostly gone.
Gender inequality, mostly gone.
Religious intolerance, mostly gone.
Environmentalism, very active.
Materialism, getting better (I believe).
Historically, there has been no middle class; you were either wealthy or on the brink of starvation every day. A middle-class person now has a better life with more luxuries than royalty did 150 years ago. I think that because everyone was so poor, having 'stuff' was a show of wealth. And as society got wealthier, people wanted to show off their 'stuff'. I believe people are slowly starting to realize that: 1) no one gives a $#!+ about other people's possessions, leading to 2) working like an @$$#*!÷ to buy a bunch of crap that no one else cares about is a waste of time. In this regard, millennials are on the right track.
For the most part, it's demand that drives supply, and demand is changing to a less work oriented lifestyle. If AI can advance employment laziness further, people will use it for just that reason. I think the days of getting up to go to work, so that you can buy a car to get to work faster are quickly fading.

As for costs not going down, free market economics. If it wasn't for our gov't policies, pretty much everything would be getting cheaper year over year.

I think we are making good progress on some things and not others.

I’m a scientist and never used to be that much into art, but many years ago an art historian pointed out how in paintings the aristocracy were always the chubby ones being painted. Their size was a direct link to their wealth, so being chubby was a sign of their status in society. The same historian then pointed out that we have now totally reversed that, to the point that it’s almost 100% the opposite, and how wealth inequality, marketing to demographics, agricultural practices, education, etc. are all linked to that over the ages. I found that incredibly interesting.
 
AI is set to make us even stupider than we already are. I was browsing a post about someone hesitant about buying a bike with forward controls. What are they? So I looked them up on duck duck ho. The first four or five hits seem to have been written by AI, none of which gave a satisfactory answer, and one of which claimed that all sport bikes have forward controls.
 
AI is set to make us even stupider than we already are. I was browsing a post about someone hesitant about buying a bike with forward controls. What are they? So I looked them up on duck duck ho. The first four or five hits seem to have been written by AI, none of which gave a satisfactory answer, and one of which claimed that all sport bikes have forward controls.
Well, the throttle, front brake and clutch are all technically forward of the rider :)

So AI was not wrong, the context was....
 
These large language models are very prone to error it seems. I was trying to remember a movie name the other night, and ChatGPT had an "oversight" I asked it about:

Link to conversation: ChatGPT

TLDR - I asked which movies had a dog named "Maya". ChatGPT reported there aren't any. I googled, found the answer myself (Eight Below), and asked ChatGPT about it. It then apologized for the "oversight" and confirmed my answer. I asked about the oversight, and it basically said "my bad" :p

"In the case of "Eight Below," I didn't initially mention Maya because my response focused more broadly on the central plot and characters without delving into specific dog names. When you prompted me about Maya, I realized the omission and corrected it."
 
At the moment, general generative AI is a bigger threat to creative endeavours than to technical ones. On the technical side, it pulls info from both incorrect and correct sources, so the answer cannot be trusted. On the creative side, it takes a large sample of human creativity and creates something "new" from it. No one (hopefully) would use a GPT application pulling general data from the 'net to engineer a bridge, but they could use it to create a good-looking bridge and then engineer a proper one that takes those creative cues. It does not mean AI cannot do technical work; it just comes down to the accuracy of the source data: crap in, crap out.

Clutch, front brake and throttle are controls, and they are forward of the rider.
 
Here's an extraordinarily long and detailed dive into what LLMs are doing. The most easily digestible stuff is at the top: What Is ChatGPT Doing … and Why Does It Work?
what ChatGPT is always fundamentally trying to do is to produce a “reasonable continuation” of whatever text it’s got so far, where by “reasonable” we mean “what one might expect someone to write after seeing what people have written on billions of webpages, etc.”
what it’s essentially doing is just asking over and over again “given the text so far, what should the next word be?”—and each time adding a word.
if we always pick the highest-ranked word, we’ll typically get a very “flat” essay, that never seems to “show any creativity” (and even sometimes repeats word for word). But if sometimes (at random) we pick lower-ranked words, we get a “more interesting” essay.

So they're effectively like a toddler, stringing together random words based on sounds they have heard other people make, with no actual understanding of their meaning. They get better based on feedback, but the concepts of correctness or truth aren't really a factor at all in what they produce. There's no objective source of truth that they could reference even if they wanted to.

The core operating principle of guessing what the next word in the sentence should be (with some intentional randomness built in to produce creative answers) also makes them deeply unsuitable for tasks that they haven't been explicitly trained for, like math: ChatGPT cannot count words or produce word-count-limited text
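That guess-the-next-word loop, with its intentional randomness, can be sketched in a few lines of Python. This is just an illustrative toy (the words and scores are made up, and a real model scores tens of thousands of candidates with a neural network), but the sampling step itself works the same way:

```python
import math
import random

def sample_next_word(scores, temperature=1.0):
    """Pick the next word from candidate scores, like an LLM's sampling step."""
    if temperature == 0:
        # always take the top-ranked word: the "flat essay" case
        return max(scores, key=scores.get)
    # softmax over (score / temperature): a higher temperature flattens the
    # distribution, so lower-ranked words get picked more often
    scaled = [s / temperature for s in scores.values()]
    peak = max(scaled)
    weights = [math.exp(s - peak) for s in scaled]
    return random.choices(list(scores), weights=weights, k=1)[0]

# toy scores for the word after "The cat sat on the"
scores = {"mat": 5.0, "sofa": 3.0, "moon": 1.0}
print(sample_next_word(scores, temperature=0))    # always "mat"
print(sample_next_word(scores, temperature=1.5))  # occasionally "sofa" or "moon"
```

At temperature 0 you get the same "flat" top choice every time; turning the temperature up is exactly the "sometimes pick lower-ranked words" trick the Wolfram article describes.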

To an untrained LLM, questions involving math are just more probability-based word-association exercises. But the layperson operating the LLM may not be aware of that, especially if the LLM has produced correct answers for similar questions in the past.

Math is pretty cut-and-dried in terms of correctness, but imagine slightly less objectively-verifiable things, like asking the LLM the distance between two locations. It's not going to go onto Google Maps and actually measure it, it's just going to do the probability-based word-association game again. It'll probably give you a pretty good answer if you ask for two large cities like Toronto and Chicago due to the vast number of references to those places in its training data, but it might produce a wildly inaccurate answer for two obscure places.
 
AI is set to make us even stupider than we already are. I was browsing a post about someone hesitant about buying a bike with forward controls. What are they? So I looked them up on duck duck ho. The first four or five hits seem to have been written by AI, none of which gave a satisfactory answer, and one of which claimed that all sport bikes have forward controls.
Most search engines allow you to restrict your search to only show you results that existed prior to 2023, which is one fascinating social response to filter out AI junk articles. Search engines could potentially provide ways to explicitly filter out AI junk, but the cynic in me thinks that they would be much more likely to simply promote their own AI junk answers instead (eg: Google's existing "People also ask" questions right below your search result).

It's not too hard to imagine the internet becoming completely spammed and overrun with AI output. Real people already do that, of course, but it's a matter of exponential scale when you get machines doing it too. Will the sea of trash results end up causing people to simply abandon search engines entirely, in the same way that no one under the age of 40 actually answers their phone anymore because of telemarketers?
 
It's not too hard to imagine the internet becoming completely spammed and overrun with AI output. Real people already do that, of course, but it's a matter of exponential scale when you get machines doing it too. Will the sea of trash results end up causing people to simply abandon search engines entirely, in the same way that no one under the age of 40 actually answers their phone anymore because of telemarketers?
I think the scam websites are already working hard at this. If you search for a very specific thing (say a part number), at least half of the first page will be complete garbage sites that contain the search term along with some nearby words that seem plausible. I'm not sure if they are feeding Google this page live somehow (e.g. a sub-search engine being called) or if they just use AI to generate billions of crap pages to be indexed.
 
many years ago an art historian pointed out how in paintings the aristocracy were always the chubby ones being painted. Their size was a direct link to their wealth, so being chubby was a sign of their status in society. The same historian then pointed out that we have now totally reversed that, to the point that it’s almost 100% the opposite, and how wealth inequality, marketing to demographics, agricultural practices, education, etc. are all linked to that over the ages.

Yep, also skin pigmentation.

In the western world, untanned skin used to be a sign of wealth and prosperity, being able to stay inside eating cake and drinking tea. Tanned skin meant you worked manual labour in the fields under the hot sun every day.

Fast forward to today, where pasty light skin means you're an office wage slave working under fluorescent lights all day. Tanned skin is now a sign of prosperity and leisure, being able to spend your days reading a book on some sun-kissed beach in the south of France or the Maldives.

We now "buy" these signs of wealth with tanning salons.
 
Relating to the impact of AI on creative industries, I listened to this take on a podcast recently from a TV producer and writer:


Coles Notes: it's coming; it's going to become very commonplace to have fully AI-generated ads to start, followed by TV/streaming, and then movies. There's lots of nuance in there, of course, including whether it's realistic to add enough processing power to accommodate the increased demand, but in the end it seems inevitable, and lots of jobs in the entertainment industry will be lost as a result. It also raises the point that ultimately all AI is doing is plagiarising other people's existing work, and asks how ethical that is. TV and movie studios won't give a crap, of course, and good luck proving it...
 
Relating to the impact of AI on creative industries, I listened to this take on a podcast recently from a TV producer and writer:


Coles Notes: it's coming; it's going to become very commonplace to have fully AI-generated ads to start, followed by TV/streaming, and then movies. There's lots of nuance in there, of course, including whether it's realistic to add enough processing power to accommodate the increased demand, but in the end it seems inevitable, and lots of jobs in the entertainment industry will be lost as a result. It also raises the point that ultimately all AI is doing is plagiarising other people's existing work, and asks how ethical that is. TV and movie studios won't give a crap, of course, and good luck proving it...

Thing is, AI is only as good as the data it’s trained on. Yes, there’s some forecasted development from AI right now, but you need the unpredictable nature of humans to produce the inventions/developments on which more AI will be trained. Get lazy or don’t produce and there’s no more data to train on. For example, the discovery of buckminsterfullerene took a couple of nutters stringing old submarine batteries together and arcing huge voltages across graphite electrodes. Many top drugs are repurposed other drugs that required clinical trial results to “discover” them, e.g. Viagra started off as a cardiovascular drug, but in clinical trials patients mentioned a strange side effect.

As for ads, I hate the damn things so I intentionally mislead algorithms by blocking random things and reporting random Facebook ads as “sexually explicit”. Good luck with AI working with that kind of non-predictive behaviour.
 
Thing is, AI is only as good as the data it’s trained on. Yes, there’s some forecasted development from AI right now, but you need the unpredictable nature of humans to produce the inventions/developments on which more AI will be trained. Get lazy or don’t produce and there’s no more data to train on. For example, the discovery of buckminsterfullerene took a couple of nutters stringing old submarine batteries together and arcing huge voltages across graphite electrodes. Many top drugs are repurposed other drugs that required clinical trial results to “discover” them, e.g. Viagra started off as a cardiovascular drug, but in clinical trials patients mentioned a strange side effect.

As for ads, I hate the damn things so I intentionally mislead algorithms by blocking random things and reporting random Facebook ads as “sexually explicit”. Good luck with AI working with that kind of non-predictive behaviour.
I'm using it more and more, as in every day. I think I've already increased productivity enough to save 100k a year.

We have ~2000 individual product descriptions of ~800 words each to write. We're doing it with AI help in about 10 days. A competent old skool writer will do about 20/day. This week we're using AI to automate translations of those descriptions into French. A really good translator will do about 20 translations in a day.

We will save some money; I'm more excited about saving time.
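Using the figures quoted in that post (2000 descriptions, a writer doing ~20/day, ~10 elapsed days with AI help), the rough arithmetic on the time saved looks like this:

```python
# Back-of-the-envelope numbers from the post above
descriptions = 2000
writer_rate = 20        # descriptions per writer per day
ai_elapsed_days = 10    # elapsed time for the whole job with AI help

writer_days = descriptions / writer_rate   # effort for one human writer
print(writer_days)                         # 100.0 writer-days
print(writer_days / ai_elapsed_days)       # 10.0x faster turnaround
```

So a single writer would need roughly 100 working days for the same batch, which is where the "save time, not just money" point comes from.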
 
I'm using it more and more, as in every day. I think I've already increased productivity enough to save 100k a year.

We have ~2000 individual product descriptions of ~800 words each to write. We're doing it with AI help in about 10 days. A competent old skool writer will do about 20/day. This week we're using AI to automate translations of those descriptions into French. A really good translator will do about 20 translations in a day.

We will save some money; I'm more excited about saving time.

That’s a good example of how the currently trained AI is good enough. You don’t really need advancements in language at this point to write monographs “better”. What can be generated is good enough. Thing is, who checks it though?
 
Thing is, AI is only as good as the data it’s trained on. Yes there’s some forecasted development from AI right now but you need the unpredictable nature of humans to produce some inventions/developments from which more AI will be trained on. Get lazy or don’t produce and there’s no more data to train on. For example the discovery of buckminsterfullerene took a couple of nutters stringing old submarine batteries together and arcing huge voltages across graphite electrodes. Many top drugs are repurposed other drugs that required clinical trials results to “discover” them….eg Viagra, started off as a cardiovascular drug but in clinical trials patients mentioned a strange side effect.

As for ads, I hate the damn things so I intentionally mislead algorithms by blocking random things and reporting random Facebook ads as “sexually explicit”. Good luck with AI working with that kind of non-predictive behaviour.
For TV shows and the like, I think there's enough source material to be plagiarised for quite some time. None of it will be particularly brilliant (though some does have a dream-like quality due to its barely perceptible unreality), but it'll be cheap and it'll fill lots of hours of programming.

As stated in the podcast, they think there will always be 'hand made' productions, but they'll become more niche in a similar way to bespoke clothes or hand-made jewellery.


I'm using it more and more, as in every day. I think I've already increased productivity enough to save 100k a year.

We have ~2000 individual product descriptions of ~800 words each to write. We're doing it with AI help in about 10 days. A competent old skool writer will do about 20/day. This week we're using AI to automate translations of those descriptions into French. A really good translator will do about 20 translations in a day.

We will save some money; I'm more excited about saving time.
Apparently a surprising amount of online news is already written by AI, even on major sites. It has put a lot of entry-level writers, who churned out daily rumours about the Royals and the like, out of work. I'm seeing more and more YouTube video search results that are clearly AI-assembled using stock footage and narrated by an AI voice, too. Real estate listings are another one; most I look at now say they're AI generated, in an unusual bit of transparency for the real estate business...
 
I'm using it more and more, as in every day. I think I've already increased productivity enough to save 100k a year.

We have ~2000 individual product descriptions of ~800 words each to write. We're doing it with AI help in about 10 days. A competent old skool writer will do about 20/day. This week we're using AI to automate translations of those descriptions into French. A really good translator will do about 20 translations in a day.

We will save some money; I'm more excited about saving time.
On that note: the headline descriptions for Schumacher's watches were bleeping awful. You only have a few words, and they wasted 50% of them on meaningless ones (rare, desirable, attractive, etc.). If I am interested in a six-figure watch, I hopefully know enough that all of those words are meaningless.
 
That’s a good example of how the currently trained AI is good enough. You don’t really need advancements in language at this point to write monographs “better”. What can be generated is good enough. Thing is, who checks it though?
We check the work with people, as we always have. But the consistency and very good grammar boil it down to making sure the description matches the SKU.

“Good enough” is a moving dot along the development path for every technology. In 1985 an Osborne CP/M computer was good enough; in 1990, a Motorola brick; in 2000, a Prius.
 
On that note: the headline descriptions for Schumacher's watches were bleeping awful. You only have a few words, and they wasted 50% of them on meaningless ones (rare, desirable, attractive, etc.). If I am interested in a six-figure watch, I hopefully know enough that all of those words are meaningless.
If you want a weirder example, check out the video from US News that @FullMotoJacket posted in that thread, which is entirely made up of stock footage and must be AI. It's even got some borderline tasteless bits, like stock footage of someone caring for an elderly man in a wheelchair while talking about Schumacher's condition. It also completely repeats itself at the end; not sure if that's an error or a way to hit a minimum length target. It's actually a fascinating example...
 
On that note: the headline descriptions for Schumacher's watches were bleeping awful. You only have a few words, and they wasted 50% of them on meaningless ones (rare, desirable, attractive, etc.). If I am interested in a six-figure watch, I hopefully know enough that all of those words are meaningless.
That does take some seat time to learn.

I had my guy practice writing AI love letters to his wife. AI copywriting machines are very capable of following your instructions.

Ask ChatGPT to write a nice anniversary note to your wife of 10 years. Then experiment by changing the ask to include her name or a memorable experience, or to make it edgy, funny, or short.
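One way to run that experiment systematically is to build the ask from a few knobs and vary one at a time. This is purely an illustrative sketch (the function and every parameter name here are made up); the resulting string is what you'd paste into ChatGPT:

```python
def anniversary_prompt(name=None, memory=None, tone=None, length=None):
    """Assemble a ChatGPT ask from a few optional knobs.

    All parameters are hypothetical; the point is that each extra
    detail you add to the ask steers the output.
    """
    parts = ["Write a nice anniversary note to my wife of 10 years."]
    if name:
        parts.append(f"Her name is {name}.")
    if memory:
        parts.append(f"Work in the time we {memory}.")
    if tone:
        parts.append(f"Make it {tone}.")
    if length:
        parts.append(f"Keep it {length}.")
    return " ".join(parts)

# vary one knob at a time and compare what comes back
print(anniversary_prompt())
print(anniversary_prompt(name="Anne", tone="funny"))
print(anniversary_prompt(memory="got caught in the rain in Lisbon",
                         length="short"))
```

Comparing the outputs from each variant is a quick way to get a feel for how much the phrasing of the ask matters.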
 
Another fun way to learn… experiment by asking ChatGPT to write a resignation letter for your favorite politicians.
 
