
AI future?


I see AI as 90% a way to increase productivity and value, and a way to make everyday life better. The other 10% is completely evil. The people who will use it for evil won’t care about any government legislation or controls, because they will be doing despicable things with it.


Sent from my iPhone using GTAMotorcycle.com mobile app
 
It's already putting many people out of work, and will continue to do so. It will also bring a lot of net-positive advancements to our society. Scary to think about, but definitely a double-edged sword, like Mac said above.
 
Maybe I'm delusionally optimistic, but if AI makes everything far cheaper and people need to work less, then I'm hoping people will spend more energy trying to improve themselves than on crap they bought/leased/financed. Bragging more on social media about books they read and actual ideas rather than the European car, big house, stupid sized TV and fake tits they can't really afford.
 
People will spend more time doing manual labour jobs, since the benefits of optimization go into the pockets of big corporations. I don't like to be pessimistic, but this is where we're already headed.
 

Definitely delusionally optimistic if you think that businesses will pass savings on to consumers. The wealth gap will grow as you need fewer people between the labour and the owners.

AI will just enhance big tech's ability to use psychological manipulation to maintain everyone's attention
 
It has the potential to be the makings of a "2nd industrial revolution"

Code 8 on Netflix kinda makes a link to it, where automation and machinery made people "with powers" obsolete. In this case it'll make a lotta people truly obsolete. It's gonna be a wild ride!
 
I thought that was 3D printing ...this would be 3rd.. :unsure:

The convergence of an insanely aging population, AI and truly skilled robots
...living in interesting times?? Yep.
 
Speaking as a software developer, AI is just code and data.

Code is easy to write. Good code is very hard to write. Code will do exactly what you tell it to do, not what you intended it to do. The difference is in understanding the context around what you want the code to accomplish. If you don't understand that context, your code may work for the extremely narrow set of criteria that you've tested, and fail spectacularly when it is fed anything outside of that context. Or much worse, it will fail subtly or silently in a way that you don't even notice, leading to cascading problems down the line that take weeks of deep investigation to unravel.
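To put that in concrete terms, here's a toy example (made up purely for illustration, not from any real system) of code that passes its narrow tests and then fails silently the moment it sees input from outside the context it was written for:

from datetime import datetime

# Hypothetical example: a date parser written and tested only against
# US-style, month-first dates.
def parse_date(text: str) -> datetime:
    return datetime.strptime(text, "%m/%d/%Y")

print(parse_date("03/12/2024"))  # March 12, 2024 -- the tested case works
print(parse_date("05/03/2024"))  # a UK user meant March 5th; this silently
                                 # returns May 3rd -- no crash, just wrong
                                 # data flowing downstream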

Large language models are mostly just "next word guessers", which take a human prompt as input and construct a response by picking words that have high associations with each other, based on their code and the data that they have been trained on. Same sort of thing with AI image generators. That's a fancy way of saying that a significant part of what AIs/LLMs do is simply smashing together words or image elements from whatever they've been fed in the first place.
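A deliberately crude sketch of the "next word guesser" idea, shrunk down to a toy bigram model (the corpus, function names, and output here are invented for illustration; real LLMs use vastly more data, context, and parameters):

import random
from collections import defaultdict

# Tiny "training corpus" -- a stand-in for the mountains of text a real
# model is fed.
corpus = "change your oil every season change your oil before storage".split()

# Record which word tends to follow which.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start: str, length: int = 6) -> str:
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        # Pick among words seen to follow the previous one; more frequent
        # followers are more likely to be chosen.
        words.append(random.choice(options))
    return " ".join(words)

print(generate("change"))  # e.g. "change your oil every season change your"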

The best way to think of this is to imagine an AI scraping the responses to every oil, battery, counter-steering, or "hadda-lay-er-down" thread in every motorcycle forum that has ever existed, and blending all of the responses together. The AI has no credibility of its own, it's just parroting jumbled up fragments of phrases that have been repeated most often in the data that it was trained on.

One of the massive problems with AIs/LLMs is the data that you feed them. If you feed them data from the open internet, you're going to get a ton of racist, sexist garbage in response, because the open internet is a seething cesspool. Is your medical AI advisor trained on data from actual medical studies and textbooks, or is it trained on a collection of facebook groups?
 
It's all very impressive and more than a little worrying. I'm worried about deep fakes and AI-based social conditioning.

But in the end, I choked down the bile and invested in Microsoft on the strength of its ChatGPT partnership... and made a nice sum of money. So that's something. :)
 
I thought it would take a few more years, but deepfakes are starting to get to the point where most people cannot tell if what they're looking at is real or completely fabricated.

In some sense, that's always been a problem with airbrushed photographs or photoshop, but AI makes that sort of work so much easier to produce now. A <fill in the blank foreign> troll farm doesn't need dozens of people creating and distributing political misinformation anymore, they can have AI create endless variations of the content and spam it far more cheaply and effectively.
 
And airbrush/photoshop was normally minor tweaks to a person. Rarely would the result look like somebody else. The ease of making any random celebrity look like they are saying something is shocking (and good enough to trick all but the most cynical).
 
I think I'm just going to shut off the internet in a few years and go raise sheep on a mountainside or something
 
Can you imagine the crapshow if you reintroduced yourself to society? Leave in 2019 and come back to pandemics, deep fakes and AI.
 
Speaking as a software developer, AI is just code and data.
100% agree. Speaking as a robotics manager, I pushed back against people overusing the term AI for years, but have recently given up, as the term is used so much and it is getting so much closer to actual artificial intelligence.

The other one that used to bug me was people saying they "designed an algorithm" as a fancy way of saying "I wrote code".

If anyone here has ChatGPT, can you please ask it what the best oil is and start a thread here for us to debate?
 
I’m using it for some work-related experiments. I write a letter of introduction for a member of my club travelling internationally, then have ChatGPT write one, and I integrate the good parts. Used progressively, it's incredible. But the downside scares the crap out of me.


Sent from my iPhone using GTAMotorcycle.com
 
I suppose I shouldn't be universally negative. There are systems that are apparently very good at identifying cancer in medical imaging. That's a perfect application for this type of technology.

There are AI projects designed to examine and identify whether images or videos have been altered or fabricated by other AIs, which is cool. Maybe there will be an arms race between them.

Manufacturers are now building into their camera hardware the ability to digitally sign images/videos with a tamper-evident signature tied to the hardware that produced them, meaning you could determine whether media was initially captured by a real physical device or not.

I'd want to go even further, and have every respectable piece of image or video software also cryptographically sign the media with the sequence of alterations that are made to it, so that anyone could take an arbitrary image or video and simply unwind all of the image manipulations to get back to the original image or video.
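As a rough sketch of what that could look like (the manifest format and function names here are invented for illustration; real provenance efforts such as C2PA use proper key-based signatures rather than this bare hash chain):

import hashlib
import json

# Each tool appends a record of its edit plus a hash chaining it to the
# previous record, so the edit history can be verified back to the
# camera's original capture.

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def make_manifest(original_image: bytes) -> list:
    # The "camera" records a hash of what it actually captured.
    return [{"step": "capture", "image_hash": digest(original_image)}]

def record_edit(manifest: list, edited_image: bytes, description: str) -> list:
    prev = digest(json.dumps(manifest[-1], sort_keys=True).encode())
    manifest.append({
        "step": description,
        "image_hash": digest(edited_image),
        "prev_record_hash": prev,  # chains this edit to the prior record
    })
    return manifest

manifest = make_manifest(b"raw sensor data")
record_edit(manifest, b"raw sensor data, cropped", "crop")
record_edit(manifest, b"raw sensor data, cropped, recolored", "colour balance")
print(json.dumps(manifest, indent=2))

In a real system the camera and each editing tool would sign their records with private keys, so a tampered or missing step in the chain would be detectable by anyone checking the published public keys.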

I do not trust any of the major information-gathering companies (Facebook, Google, etc) to have my best interests at heart in these matters, though. The funding for the cool public-protecting projects will be absolutely dwarfed by the business interests of massive corporations.
 
