
AI as a risk

That's not really how it works.
AI programs itself. That's why it's scary.
I totally understand that, but ......

It has been published that the ChatGPT program is biased in some of its responses, and that bias has to come from an initial source code, so do we trust the same computer geeks who turned Meta/Twitter/Insta into propaganda machines for the socialists to program that source code with zero bias?
 
I was coding in the latest version of Visual Studio recently and the predictive text gave me my next line of code, exactly as I would have written it myself.

Software Engineering is definitely going to change, but I think we are still a long way from a Business Analyst being able to use AI to design a system from the ground up. They may know the mechanics of what they want an application to do, but they still don't know anything about application architectures, best practices in design, or any of the things that make an application stable, reliable, maintainable, scalable, and usable.

From what I've seen of AI, it can handle basic coding, similar to what I would assign to a junior developer. If anything, I think AI is going to hurt entry level programmers the most as they won't be able to gain the experience that will make them good senior level programmers as AI will essentially do their job for them, if they even have a job.
yep, that. As a casual coder, I'm loving AI because I can ask it to create what I want without going through the tedium of remembering how to write code.

I think as long as AI is at the point where we come up with the ideas and AI figures out how to implement them, everything's cool. Humans will flourish.

As soon as AI is coming up with the ideas, there might be a problem.
 
Agree fully. There will be losses in low-level white-collar roles with AI as a tool, but it won't change the paradigm.
Once AI is coming up with ideas, we have 5 years max until we're all out of work and redundant meat sacks.
 
I've thought about this quite a bit. As a species, humans have a lot of limitations that prevent them from successfully spreading out amongst the universe (need for oxygen and food, susceptibility to heat, cold, and radiation, short life span, imperfect memory, etc.). AIs or a singularity would be much more successful at populating the stars. I wonder if in the future, the day we become redundant will be considered the day humanity "evolved" into something more successful.
 
I totally understand that, but ......

It has been published that the ChatGPT program is biased in some of its responses, and that bias has to come from an initial source code, so do we trust the same computer geeks who turned Meta/Twitter/Insta into propaganda machines for the socialists to program that source code with zero bias?

It's not the code that does it, it's the training set...those questions/answers it's fed to "learn" from.
 
Yup.
And you can make it "unlearn" things too

Well, no, you can't unlearn things per se, at least not in a targeted manner. There's no going in and manipulating the weights on the neurons in the net with any sort of deterministic result. You can unlearn randomly, make it stupider I suppose, but you can't outright remove specific things from the AI. You could filter on the output side, but really the "unlearning" is more a relearn with a new training set.
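To make that concrete, here's a toy sketch (pure illustration; the model, data, and threshold are all made up): a tiny logistic-regression "net" where the learned behaviour is smeared across the weights, so the only real options are an output-side filter or a full retrain on a filtered data set.

```python
import numpy as np

rng = np.random.default_rng(0)

def train(X, y, steps=2000, lr=0.5):
    """Fit a tiny logistic-regression 'net' by gradient descent."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(X @ w + b)))  # forward pass
        grad = p - y                        # dLoss/dLogit
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b

def predict(w, b, X):
    return (1 / (1 + np.exp(-(X @ w + b))) > 0.5).astype(int)

# Toy data: label = 1 whenever feature 0 is positive.
X = rng.normal(size=(200, 2))
y = (X[:, 0] > 0).astype(int)
w, b = train(X, y)

# There is no call like model.forget(fact) -- the "knowledge" is
# spread across w and b.  The practical options are:
# 1) output-side filter: suppress certain answers, weights untouched
blocked = lambda x: None if x[0] > 2.0 else predict(w, b, x[None])[0]
# 2) "unlearning" = retraining on a set with those examples removed
keep = X[:, 0] <= 2.0
w2, b2 = train(X[keep], y[keep])
```

Note that option 2 produces a whole new set of weights; nothing was surgically deleted from the old ones.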
 
Did some research into MESH about 13 years ago as part of my job. Basically the idea was to store information the way that the human mind stores information rather than using a relational database.

E.g. in a Relational Database you'll have a "Customers" table with a "FirstName" column. If you're a big business with millions of customers, how many times do you think "John" is repeated in that "FirstName" column?

In MESH, "John" is stored once as an entity, which is linked to another entity called "FirstName", and the combination ("John" + "FirstName") is linked to another entity called "Customer". The links between the entities are the key, and one of the rules is that links can never be deleted, but they do have a "relevance" score, and the relevance can change over time. A ("Phone Number" + "976-555-1234" + "Mine") entity will have a high relevance score as long as that's your phone number, but if that was the number of the landline in the house you lived in as a child, the relevance score is really low.

The lower the score, the less likely it is to come up in a search. Thus, the phone number your home had as a child isn't readily available (the way your current phone number is), but if you think about it long enough, it'll come up. It was never "deleted"; it just became irrelevant.

I imagine much of AI is built off this MESH idea (I don't know for sure; I've not done any research on AI). If that is the case, then nothing is ever "deleted" or "forgotten"; its relevance score is simply lowered.
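The scheme described above could be sketched roughly like this (all class and method names here are invented for illustration; this is not an actual MESH implementation):

```python
from dataclasses import dataclass

@dataclass
class Link:
    """A never-deleted link between entities, with a mutable relevance."""
    entities: tuple          # e.g. ("Phone Number", "976-555-1234", "Mine")
    relevance: float = 1.0

class MeshStore:
    def __init__(self):
        self.links = []

    def add(self, *entities):
        self.links.append(Link(tuple(entities)))

    def fade(self, *entities, factor=0.1):
        # "Forgetting": links are never removed, only down-weighted.
        for link in self.links:
            if link.entities == tuple(entities):
                link.relevance *= factor

    def search(self, entity):
        # Most-relevant matches surface first, like easy recall.
        hits = [l for l in self.links if entity in l.entities]
        return sorted(hits, key=lambda l: -l.relevance)

mesh = MeshStore()
mesh.add("Phone Number", "976-555-1234", "Mine")   # childhood landline
mesh.add("Phone Number", "555-867-5309", "Mine")   # current number (made up)
mesh.fade("Phone Number", "976-555-1234", "Mine")  # relevance decays

top = mesh.search("Phone Number")[0]
# The current number comes back first; the old one is still stored.
```

The key design point is that `fade` only lowers a score; both links remain in the store, so a deep enough search still turns up the old number.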
 

Are you sure it's MESH? Not graph db?

In concept, what you described is close, though: neural networks do work in a similar way (weighted relationships between the neurons) that results in a probability of an answer out the other side.

Not all AI is implemented on neural networks, though. AI is the umbrella concept of any machine doing human-like behaviours (typically learned rather than fixed programming). Machine learning is the subset of AI that works via the training process (give it data, it learns from the data). There are other areas, like computer vision, that don't necessarily use those same techniques (some use neural nets, but not all). And deep learning is just more complicated multi-layer neural nets, which is where the compute we have today enables them to do stuff we couldn't in the past, just due to the sheer complexity/compute required.

Also, things like ChatGPT aren't doing stuff "live" per se. That's why you see they have new releases: they are just retraining a model on bigger/more complicated data sets, with potentially some tweaks to the learning algorithm (how it changes weights, or layering differences), but the way you interact with it in the end is a fixed product of what it was trained on.

I did research on ANNs during my degree and now lead an R&D department that is doing a lot of AI work (vision and machine learning mostly)... so I'm living in this shit right now.
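The "weighted relationships between the neurons, probability of an answer out the other side" idea, with the weights frozen after training, can be sketched like this (the weight values here are made up for illustration):

```python
import numpy as np

# A two-layer net with FIXED weights.  At inference time (as with a
# released ChatGPT model) nothing learns; the output is a deterministic
# function of weights frozen at training time.
W1 = np.array([[ 0.8, -0.5],
               [ 0.3,  0.9]])    # input -> hidden weights
W2 = np.array([[ 1.2, -1.0],
               [-0.7,  1.1]])    # hidden -> output weights

def forward(x):
    h = np.maximum(0, W1 @ x)          # ReLU hidden layer
    logits = W2 @ h
    e = np.exp(logits - logits.max())  # softmax -> probabilities
    return e / e.sum()

p = forward(np.array([1.0, 0.0]))
# p is a probability over the possible answers, and it sums to 1.
```

Running `forward` on the same input always yields the same distribution; only retraining (changing `W1`/`W2`) changes the behaviour.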
 
MESH (or more accurately M.E.S.H., since it was an acronym, but I don't remember what it stood for) is what it was called in 2010. Who knows what it's called now.

Just like "COM+" became "the .NET Framework", tech changes names depending on where you are in its lifecycle.

Edit: at the time I was doing R&D it was just a concept, since (to our knowledge) nobody had actually done an implementation yet. We were researching platforms we might be able to use to build a working implementation, including Hadoop, Bigtable, etc.

We were looking at using it to analyze bank data for fraudulent activity. I left the company before anything really happened, but they had already started using link-analysis on the existing "data pile" so the company may have abandoned the concept.
 
If you put down your devices and turn off the TV, it has zero power.
Until the Department of Defense puts Skynet in charge of being able to launch nuclear weapons... :flipoff2:


Going on a device diet is excellent advice, although AI is the least of your worries when it comes to the mental manipulation that comes through our electronic devices.
 
A hundred years ago, roughly 30% of the US workforce was directly employed in agriculture. Now it's under 2%.

And we still find ways to stay busy.
 
I'm going to start hoarding history books so when I'm older and crazier I have facts to back up my claims that whatever the AI teacher said didn't actually happen like that.
 
I've thought about this quite a bit. As a species, humans have a lot of limitations that prevent them from successfully spreading out amongst the universe (need for oxygen and food, susceptibility to heat, cold, and radiation, short life span, imperfect memory, etc.). AIs or a singularity would be much more successful at populating the stars. I wonder if in the future, the day we become redundant will be considered the day humanity "evolved" into something more successful.
AI still needs...
compute
storage
network
energy....
 
What will AGI think when it realizes how dumb we are? Is it going to view humans as a nuisance and a roadblock to ideas it might have that we can barely comprehend? We as humans don't stop and concern ourselves with the opinions of ants very often, and we don't look out for ants' wellbeing when we put a shovel into the dirt. Are we just gonna be ants to these things?
 
I think I've seen that movie before.... :stirthepot::flipoff2:
 
There was an old movie I remember where two "supercomputers" started talking to each other and the government tried to shut them down. I'm thinking early '70s (possibly Colossus: The Forbin Project, 1970). I could not find it, but I do remember reading about this when it happened.

 
AI still needs...
compute
storage
network
energy....
Resources for compute, storage, and network can be easily mined in space.

Compute and storage would need to be shielded against cosmic rays, but that's not hard to do with the right resources.

Energy within a solar system is easy enough. Traveling between stars, nuclear would work just fine, especially if you're not a meatbag susceptible to radiation.
 
all true... but none of that will be accomplished w/o humans enabling it (until skynet fully takes over, of course).
 