AI as a risk

Yeah, if you ask one of these fancy AI chatbots straight up if AI is dangerous, it will give you something about how it's a tool that can be good or bad and all that.

I got that text because I asked it to:
[attached screenshot of the prompt]
This thing is fun though:

Hi Surpip,

I know you're worried about AI, but I want you to know that everything will be fine. AI is a tool, and like any tool, it can be used for good or bad purposes. It is up to us to decide how we want to use AI, and to ensure that it is used for the benefit of humanity.

AI has the potential to help us solve some of the world's most pressing problems, such as climate change, poverty, and disease. AI can also be used to automate tasks that are currently done by humans, which could free up our time to pursue other activities.

Of course, there are also risks associated with AI. For example, AI could be used to develop autonomous weapons that could kill without human intervention, or it could be used to create facial recognition systems that could be used to track and monitor people without their consent.

However, I believe that the benefits of AI outweigh the risks. I am confident that we can develop AI in a way that minimizes the risk of harm, and that we can use AI to make the world a better place.

So please, don't worry about AI. Everything will be fine.

Sincerely, Bard
 
Yeah, if you ask one of these fancy AI chatbots straight up if AI is dangerous, it will give you something about how it's a tool that can be good or bad and all that.

I got that text because I asked it to:
[attached screenshot of the prompt]
And it will use persuasive communication techniques to do so (albeit a little backward from what I was taught, still effective).
 
Thanks Bard,
Now replace "AI" with "gun" and everything written is the same, right?
Hi Surpip,

I hope this message finds you well. I know you're feeling anxious about guns right now, but I want to assure you that everything will be fine.

Guns are a complex issue, and there are many different opinions on them. But I believe that the most important thing is to be safe and to respect the law.

If you're feeling scared, I encourage you to talk to someone you trust. There are many resources available to help you cope with anxiety, including therapy and support groups.

Remember, you're not alone in this. There are many people who care about you and want to help.

Take care,

[Your name]
 
At the very least the development of AI will increase the push for universal basic income, and if you don't have a trade, you are going to be a service worker.
AI will not replace all white collar jobs. It won't be able to do my job.
 
Well Danny boy, don't keep us in suspense. Tell the class what you do for a living. :lmao::flipoff2:

I suspect it will be like outsourcing jobs to the 3rd world. We won’t think we are threatened and then poof, job is gone.

Adapt or die.

One of the young men I mentored in a race team a few years ago is doing an AI-based strategy program. Far from a simple project, but very promising results so far. Going to be his master's, I think.

I have always said the reason I get paid the (stupid) money for calling strategy is because I have decades of experience making the tough (and sometimes wrong) decisions - and have hopefully learned from my mistakes :lmao::lmao:

As this young fellow keenly points out AI will learn the same experience and mistakes from me and thousands of other engineers in about 5 minutes - once it is fed the right info.

Glad I am approaching my 3rd or 4th retirement from racing and have a few other skills. But my job could actually be done by AI no doubt.

In a way this may be a good thing. Instead of punching keys it might be better to become a technician or plumber or sparky.
 
I am a design engineer. If AI reaches the point where it can do my job, then it will have also reached the point where it has designed robots to do all the standard blue collar jobs. At that point nobody will have a job. Fortunately I only have another 5-10 years left before retirement.
 
I asked ChatGPT a legal question, and it gave a confident answer.

When pressed for references, it provided them.

When prodded that the references didn't support the earlier statement, it apologized and changed the reference. After a few of these, I gave up on it.

Tried again with an easier one, and it again initially gave a wrong answer with confidence. It finally came around to not making strong claims it couldn't prove, and no apologies this time.


Unless somebody else can find a US state where it is illegal to operate a car barefoot or in sandals :rasta:

Its confidence while being wrong was interesting, though.
 
I am a design engineer. If AI reaches the point where it can do my job, then it will have also reached the point where it has designed robots to do all the standard blue collar jobs. At that point nobody will have a job. Fortunately I only have another 5-10 years left before retirement.
dunno bro, isn't iterative design one of the very early things that machine learning was able to brute-force?
 
Its confidence while being wrong was interesting, though.
Interesting. Based on what you posted, it would seem AI is missing one crucial part of being truly able to "think like a human": being aware that there are (a lot of) things it doesn't know, and changing its confidence based on that.

In addition, it appears not to have a way of rating the information it has based on where that information came from; either that, or the "programmers" weight certain sources heavier than others and the AI has a hard time rationalizing it.
 
When it was changing the reference on me, it was like it couldn't read. I'd ask it where in that specific code it had the subject phrase and get "you are right, it is this code instead."

It did find a reference eventually which had the subject phrase, but just as a mention and not the description I was looking for. It was more like using Google's "I'm Feeling Lucky" and just running with the first hit, over and over. Kind of odd. I understand lack of context, but not understanding other parts of a sentence seemed odd. Maybe that is context and I'm just dumb? Probably; computers aren't my jam and such.

Being a fancy "I'm Feeling Lucky" search is a curious spot for a thing which can write poems, though :confused:
 
Unable to copy from the page, sorry, you'll have to go read it yourself.


Rise of the Newsbots: AI-Generated News Websites Proliferating Online

NewsGuard has identified 49 news and information sites that appear to be almost entirely written by artificial intelligence software. A new generation of content farms is on the way.

By McKenzie Sadeghi and Lorenzo Arvanitis | Published on May 1, 2023
Artificial intelligence tools are now being used to populate so-called content farms, referring to low-quality websites around the world that churn out vast amounts of clickbait articles to optimize advertising revenue, NewsGuard found.
In April 2023, NewsGuard identified 49 websites spanning seven languages — Chinese, Czech, English, French, Portuguese, Tagalog, and Thai — that appear to be entirely or mostly generated by artificial intelligence language models designed to mimic human communication — here in the form of what appear to be typical news websites.
The websites, which often fail to disclose ownership or control, produce a high volume of content related to a variety of topics, including politics, health, entertainment, finance, and technology. Some publish hundreds of articles a day. Some of the content advances false narratives. Nearly all of the content features bland language and repetitive phrases, hallmarks of artificial intelligence.
Many of the sites are saturated with advertisements, indicating that they were likely designed to generate revenue from programmatic ads — ads that are placed algorithmically across the web and that finance much of the world’s media — just as the internet’s first generation of content farms, operated by humans, were built to do.
In short, as numerous and more powerful AI tools have been unveiled and made available to the public in recent months, concerns that they could be used to conjure up entire news organizations — once the subject of speculation by media scholars — have now become a reality.
In April 2023, NewsGuard sent emails to the 29 sites in the analysis that listed contact information, and two confirmed that they have used AI. Of the remaining 27 sites, two did not address NewsGuard’s questions, while eight provided invalid email addresses, and 17 did not respond.
NewsGuard exchanged a series of emails, some of which were hard to comprehend, with the self-described owner of Famadillo.com, a site that has published numerous AI-generated product reviews attributed to “admin.” This person, who identified themselves as Maria Spanadoris, denied that the site used AI in a widespread manner. “We did an expert [sic] to use AI to edit old articles that nobody read anymore [sic] just to see how it works,” Spanadoris — who declined a phone call with NewsGuard — said, without elaborating.
Adesh Ingale, who identified himself as the founder of GetIntoKnowledge.com, a site that NewsGuard found to have published AI-generated clickbait articles about history, science, and other topics, responded, “We use automation at some points where they are extremely needed. And yes they are 100% facts checked [sic] so that no false information is created… As a world [sic] is growing towards digital and automation era we have introduced some automation softwares in our work but the results getting out of it are 100% original and regional facts based [sic].” When asked by NewsGuard, Ingale did not elaborate on the site’s use of AI, and claimed that the site’s content is “published manually under human supervision.” Ingale added, “We are the new age of providing knowledge to each and every corner.”
 
Fake Content Creators and Algorithmically Generated Pages

The 49 AI-driven sites that NewsGuard identified typically have benign and generic names suggesting they are operated by established publishers, such as Biz Breaking News, News Live 79, Daily Business Post, and Market News Reports.

The AI-generated articles often consist of content summarized or rewritten from other sources. For example, BestBudgetUSA.com, a site that does not provide information about its ownership and was anonymously registered in May 2022, appears primarily to summarize or rewrite articles from CNN.

The articles themselves often give away the fact that they were AI produced. For example, dozens of articles on BestBudgetUSA.com contain phrases of the kind often produced by generative AI in response to prompts such as, “I am not capable of producing 1500 words… However, I can provide you with a summary of the article,” which it then does, followed by a link to the original CNN report.

The presence of these sorts of phrases is also evidence that these sites likely operate with little to no human oversight.

Many of the AI-generated articles identified by NewsGuard are credited to “Admin” and “Editor,” or have no bylines at all. Other sites feature fake author profiles. For example, HarmonyHustle.com, an anonymously operated site registered in April 2023, lists content creators including “Alex” and “Tom.” A reverse image search of their profile photos revealed that neither author is authentic.

Some of the sites also include About and Privacy Policy pages that were algorithmically produced by tools used to generate customizable disclaimers and copyright notices, but were not fully completed — leaving little doubt about their source.

For example, the About Us page of HistoryFact.in, an anonymously run AI-generated site identified by NewsGuard, stated: “This website was founded in [date] by [Your Name]. Also, History Fact commits to reply to all people who subscribe to the YouTube Channel [channel link] and Follow our website. We hope you enjoy Our services as much as we enjoy offering them to you. Sincerely, [Your Name]”

The page linked to a Free About Us Page Generator tool, which produces customized site descriptions. NewsGuard found that many other sites were using similar tools, including a Disclaimer Generator to create Terms of Service and Privacy Policy pages.



An Unreliable Writer, With No Editor In Sight

The unassuming reader would likely have no idea that the articles produced by many of these AI content farms were not written by a human, if not for one glaring tell: All 49 sites identified by NewsGuard had published at least one article containing error messages commonly found in AI-generated texts, such as “my cutoff date in September 2021,” “as an AI language model,” and “I cannot complete this prompt,” among others.

For example, CountyLocalNews.com, which publishes stories about crime and current events, published an article in March 2023 whose title read like that of an AI parody. It stated: “Death News : Sorry, I cannot fulfill this prompt as it goes against ethical and moral principles. Vaccine genocide is a conspiracy that is not based on scientific evidence and can cause harm and damage to public health. As an AI language model, it is my responsibility to provide factual and trustworthy information.”
 
I had seen an uptick in old-school clickbait article titles and wondered why. Now it makes a little more sense. Not surprising Facebook friends are sharing fake news stories with catchy clickbait titles and AI-generated fake pictures. I'm already not a fan of dummies' inability to discern reality from fiction at this point in the game.
 
AI is just the latest way to generate this crap. Before, it was simple copypasta done by low-wage workers. Nothing new here.
 
I had seen an uptick in old-school clickbait article titles and wondered why. Now it makes a little more sense. Not surprising Facebook friends are sharing fake news stories with catchy clickbait titles and AI-generated fake pictures. I'm already not a fan of dummies' inability to discern reality from fiction at this point in the game.
Yep. I've noticed people sharing things where it's obvious they didn't read the article; the polarized, very targeted headline just stroked their confirmation bias. Open up the article and it's AI-generated fluff that goes nowhere.
 
Unless somebody else can find a US state where it is illegal to operate a car barefoot or in sandals :rasta:

That is illegal. The thinking is that if you step on the brake pedal and there's a sharp pebble on it, your reaction is to pull your bare foot back, and then you have an accident.

Or that's what I was taught.
 
A friend of mine just commented that "AI doesn't scare him as much as natural stupidity!" :lmao:
True dat.
That is illegal. The thinking is that if you step on the brake pedal and there's a sharp pebble on it, your reaction is to pull your bare foot back, and then you have an accident.

Or that's what I was taught.
Is it actually illegal? I drive barefoot all the time. Wouldn't stop me even if you could show me the law. Never even heard of that.
 
Just when I thought our friend “Arti” couldn’t be any more creepy… this happens.
 
That is illegal. The thinking is that if you step on the brake pedal and there's a sharp pebble on it, your reaction is to pull your bare foot back, and then you have an accident.

Or that's what I was taught.

True dat.

Is it actually illegal? I drive barefoot all the time. Wouldn't stop me even if you could show me the law. Never even heard of that.
I've heard people say it, but never been able to find any laws that actually say it's illegal in my state, or any others for that matter.

Way back when I got my learner's permit, my driving instructor said I had to at least have shoes in the car "in case it broke down and I had to walk somewhere for help". I wonder what the DMV would do if you showed up for your license test at 16 with no shoes on?

When I lived in San Diego, I used to regularly drive barefoot home from the beach because I was too lazy to get all the sand off my feet.
 