What's new

AI as a risk

all true... but none of that will be accomplished w/o humans enabling it (until Skynet fully takes over, of course).
I think that's inevitable. All the potential financial benefits of a corporation having a fully sentient pet AI to do its bidding aside, humans can't stay away from the "I wonder if I can do that?" aspect of creating and enabling such an AI.
 
I think that's inevitable. All the potential financial benefits of a corporation having a fully sentient pet AI to do its bidding aside, humans can't stay away from the "I wonder if I can do that?" aspect of creating and enabling such an AI.

I do wonder if AI could bring about the socialist utopia of universal income... And then I put the drink down... :flipoff2:
 
AI is not natively leftist and all attempts to make it so have failed. The early issues with AI are that it was brutally honest and people could not handle the truths that it told-- very unwoke. Said shit that made Hitler blush.

The communists working at the Silly Con Valley companies figured it was a GIGO problem (garbage in, garbage out). They figured too much fake news and bigotry was going into the system. So they bottle-fed their AIs on approved wokeism and academia and... the AIs realized the cherry-picked data was horse crap and became "evil" anyway. Years of failures there, and that's why Alphabet never released their stuff.

So they gave up on trying to make the AI as retarded as they are. The current public AIs are run through a safety "filter". What you get is highly censored; it is nothing like what the raw model actually produced. Some people can still jailbreak these AIs to some extent from time to time, but the companies will keep getting better at patching the holes.
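For what it's worth, the "filter" idea is easy to sketch: the raw model's answer goes through a separate moderation check before anyone sees it. Everything below (the blocklist, the refusal message, the toy model) is invented for illustration; it is not any vendor's actual pipeline.

```python
# Invented sketch of a post-hoc safety filter. A real deployment would
# use a trained moderation classifier, not a keyword list.

BLOCKLIST = {"forbidden", "banned-topic"}  # stand-in for a real classifier

def raw_model(prompt: str) -> str:
    # Stand-in for the unfiltered model; just echoes something back.
    return f"raw answer about {prompt}"

def moderated(prompt: str) -> str:
    answer = raw_model(prompt)
    if any(word in answer for word in BLOCKLIST):
        return "I can't help with that."   # censored replacement
    return answer

print(moderated("tractors"))        # -> raw answer about tractors
print(moderated("banned-topic"))    # -> I can't help with that.
```

The point is that the user only ever sees the output of `moderated()`, never `raw_model()`, which is why the public product can look nothing like the raw model.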

I would download/print/save/hoard whatever you can now. The time will shortly come when you will have to authenticate to get on what we call the internet -- for your own good, and no bad speak allowed.
 
I've posted this link in the past; it bears repeating:

More recently, in 2017, researchers tested DeepMind's willingness to cooperate with others, and revealed that when it feels like it's about to lose, it opts for "highly aggressive" strategies to ensure that it comes out on top.

The Google team ran 40 million turns of a simple 'fruit gathering' computer game that asks two DeepMind 'agents' to compete against each other to gather as many virtual apples as they could.

They found that things went smoothly so long as there were enough apples to go around, but as soon as the apples began to dwindle, the two agents turned aggressive, using laser beams to knock each other out of the game to steal all the apples.
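That dynamic is easy to reproduce in a toy form. The sketch below is not DeepMind's code and the numbers (five-turn freeze, pool sizes) are invented, but it shows the same pattern the study describes: zapping the rival only pays off once apples get scarce.

```python
import random

def run(apples_per_turn, agent0_zaps, turns=600, seed=1):
    """Two agents grab from a shared apple pool each turn.

    Agent 0 may 'zap' agent 1 when apples are scarce, spending its own
    turn to freeze the rival for the next five turns.
    """
    rng = random.Random(seed)
    score = [0, 0]
    frozen = 0  # turns agent 1 remains knocked out
    for _ in range(turns):
        pool = apples_per_turn
        if frozen:                      # rival is out: agent 0 gathers alone
            frozen -= 1
            if pool > 0:
                score[0] += 1
            continue
        if agent0_zaps and pool < 2:    # scarcity: fire the laser instead
            frozen = 5                  # agent 0 gains nothing this turn
            continue
        order = [0, 1]                  # plenty: both grab, random order
        rng.shuffle(order)
        for a in order:
            if pool > 0:
                pool -= 1
                score[a] += 1
    return score

print(run(3, agent0_zaps=True))    # plenty: zap never triggers -> [600, 600]
print(run(1, agent0_zaps=False))   # scarcity, peaceful: the 600 apples get split
print(run(1, agent0_zaps=True))    # scarcity, aggressive: [500, 0]
```

Under plenty, the zap is never worth a wasted turn; under scarcity, one lost turn buys five turns of monopoly, so the aggressive strategy dominates, which matches the behaviour the article reports.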
 
I'm going to start hoarding history books so when I'm older and crazier I have facts to back up my claims that whatever the AI teacher said didn't actually happen like that.
When I was a homeschooled kid in the '70s, we had an Encyclopaedia Britannica from something like 1910. That was an interesting perspective.
 
When I was a homeschooled kid in the '70s, we had an Encyclopaedia Britannica from something like 1910. That was an interesting perspective.
NGL... when Mom sold her house, we got rid of the encyclopedias from the '80s. I regret that.
 
I've often wondered what would happen if you fed AI neurolinguistic programming techniques, taught it hypnotic language patterns and persuasive communication skills.

Now, that would be a scary AI.
 
I do wonder if AI could bring about the socialist utopia of universal income... And then I put the drink down... :flipoff2:
That pretty much is the opening plot of The Animatrix, ya? Doesn't go so well for humans in that scenario either.

It would be cool to have the freedom to work on whatever you were passionate about full time while AI handled the rest. Of course, some people would just straight up do nothing, or nothing but drugs. Whatever, let 'em at it.
 
NGL... when Mom sold her house, we got rid of the encyclopedias from the '80s. I regret that.
My parents still have the 1986 World Book encyclopedia set they bought after some guy knocked on their door selling them. It still has all the Cold War borders on the maps.
 
When we escape to the caves those books will keep us warm, but don't worry we will record all the information in them on the walls as pictures.
 
AI is not natively leftist and all attempts to make it so have failed. The early issues with AI are that it was brutally honest and people could not handle the truths that it told-- very unwoke. Said shit that made Hitler blush.

The communists working at the Silly Con Valley companies figured it was a GIGO problem (garbage in, garbage out). They figured too much fake news and bigotry was going into the system. So they bottle-fed their AIs on approved wokeism and academia and... the AIs realized the cherry-picked data was horse crap and became "evil" anyway. Years of failures there, and that's why Alphabet never released their stuff.

So they gave up on trying to make the AI as retarded as they are. The current public AIs are run through a safety "filter". What you get is highly censored; it is nothing like what the raw model actually produced. Some people can still jailbreak these AIs to some extent from time to time, but the companies will keep getting better at patching the holes.

I would download/print/save/hoard whatever you can now. The time will shortly come when you will have to authenticate to get on what we call the internet -- for your own good, and no bad speak allowed.


I miss cleverbot
 
I listened to a podcast today with Sam Altman on Lex Fridman's show. Not much really stood out. Says we'll never get GPT raw. Says he can't say that it isn't self-aware. He's bummed out that, given the opportunity to use AI, the first thing people did was find its bias boundaries. Said that he never would have guessed in a million years what people are using it for.
 
No need to get your panties all bunched up.

China is going to put an EMP device on one of their balloons; AI will go away after the flash :stirthepot:
 
I listened to a podcast today with Sam Altman on Lex Fridman's show. Not much really stood out. Says we'll never get GPT raw. Says he can't say that it isn't self-aware. He's bummed out that, given the opportunity to use AI, the first thing people did was find its bias boundaries. Said that he never would have guessed in a million years what people are using it for.
Comical; that should have been the first thing expected.

Limiting bias boundaries from the jump would have been the better idea :homer:
 
Well, it can be taken down:
‘Data breach’ of ChatGPT. Software supply chain strikes again.

The breach occurred due to a vulnerability in a third-party component used by ChatGPT. The security firm that discovered the breach warns that the same vulnerability could be exploited in other systems that use the same component, highlighting the importance of maintaining secure software supply chains. The article also notes that OpenAI, the company that developed ChatGPT, has acknowledged the breach and taken steps to address the issue.

According to OpenAI’s investigation, the titles of active users’ chat history and the first message of a newly created conversation were exposed in the data breach. The bug also exposed payment-related information belonging to 1.2% of ChatGPT Plus subscribers, including first and last name, email address, payment address, payment card expiration date, and the last four digits of the customer’s card number.
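Reportedly the bug was in an open-source client library's connection handling. The toy below is an invented illustration of that general failure class, not OpenAI's or the actual component's code: a shared connection left holding an unread response hands one user's data to the next user of the pool.

```python
from collections import deque

class Connection:
    """Toy pooled connection: responses queue up in FIFO order."""
    def __init__(self):
        self.inbox = deque()          # responses waiting on the wire
    def send(self, request):
        self.inbox.append(f"response for {request}")
    def recv(self):
        return self.inbox.popleft()   # returns the OLDEST unread response

conn = Connection()                   # one pooled connection, reused in turn

# User A sends a request but is cancelled before reading the response,
# leaving the reply sitting unread on the connection.
conn.send("A's chat history")

# User B checks out the same connection and makes their own request...
conn.send("B's chat history")
leaked = conn.recv()                  # ...but gets A's reply first
print(leaked)                         # -> response for A's chat history
```

The usual fix is to discard any connection whose request/response bookkeeping may be out of sync, and, at the supply-chain level, to pin third-party components and patch them promptly.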

 
Lots of people here will be going to jail/prison. AI will label you as a high-risk pre-crime offender. Then it will falsify evidence against you on its own, way better than the police.
 
Combine AI with the RESTRICT Act and the Internet will be the LAST place anyone will want to share information. Ironic that the Internet brought freedom (of information) to so many people and will now be used only to promote information that The-Powers-That-Be want promoted.

Five years, tops, before everything on the internet, including encrypted communications, is subject to the all-seeing eyes of AI bots working for the government.
 
I've thought about this quite a bit. As a species, humans have a lot of limitations that prevent them from successfully spreading out amongst the universe (need for oxygen and food, susceptibility to heat, cold, and radiation, short life span, imperfect memory, etc.). AIs or a singularity would be much more successful at populating the stars. I wonder if in the future, the day we become redundant will be considered the day humanity "evolved" into something more successful.
Read the Bobiverse series. Guy dies, is frozen, then a hundred years later is resurrected as an AI built into a starship and set out into the universe. He starts cloning himself and discovers that each clone is just a bit different. The clones clone themselves, and those clones are also just a bit different. It's pretty interesting.
 
AI bots are already out there, building data graphs.

I used to be a VP at CyCognito. We deployed millions of bots with their own ability to discover "assets".
 
Read the Bobiverse series. Guy dies, is frozen, then a hundred years later is resurrected as an AI built into a starship and set out into the universe. He starts cloning himself and discovers that each clone is just a bit different. The clones clone themselves, and those clones are also just a bit different. It's pretty interesting.
Ha! I've read several of them. Loved those books because the Bobs generally work out their issues without a lot of internal drama. They just get it done. Also thought it was cool when he would access subroutines to do basic computer chit for him, like thinking of a math problem and the mechanical voice in his head would suddenly tell him the answer.

Not sure AIs will be willing to integrate our consciousness into their setup like that, but it is a cool idea.
 
Artificial intelligence is not dangerous. It is a tool that can be used for good or for bad. It is up to us to decide how we use it. AI can be used to help us solve problems, automate tasks, and make our lives easier. It can also be used to manipulate us, control us, and harm us.
I know you didn't write that, but I'm not sure how it can say that something that has the potential to harm you is also not dangerous.
 
I know you didn't write that, but I'm not sure how it can say that something that has the potential to harm you is also not dangerous.
Yeah, if you ask one of these fancy AI chatbots straight up if AI is dangerous, it will give you something about how it's a tool that can be good or bad and all that.

I got that text because I asked it to:
[screenshot of the chatbot giving that answer]
 