Global IT Outage - Microsoft/CrowdStrike

MS Windows with third-party security software, what could go wrong... :lmao:

["What could possibly go wrong" GIF]
 
I only got into it enough to recognize whether what I was looking at was code. If so, send it to someone who knows wtf to do with it :laughing:

IBM 286… DOS 5.0… wrote a split screen word processor that allowed caps and RGB font color.

Favorite project in college
 
I live by KCI; it's been pretty quiet today, kinda nice. I've seen several Southwest planes and one all-white one.
 
FML today. Personally, I think this was intentional. In light of other current events, the timing of this reeks. They were immediately saying NOT A CYBER ATTACK, yet it just happened to be more effective than any other cyber attack in history. I'm not buying that CrowdStrike did zero testing prior to deploying the update. I'd be looking real close at anyone involved. #insidejob
 
Been in the hospital with family members on and off the last couple days. Total shit show this morning with all computer systems down and all logs currently being done by hand. They brought every room a breakfast plate since all the food ordering systems are down. And that's just what has been shared with us.

Was discharged yesterday afternoon. Had security guards at all managed access doors instead of the doorbell cameras. Had to do all of our discharge paperwork manually. The staff of the hospital managed, but I could tell they were struggling badly. The food service staff continued to just bring every occupied room the same plate for lunch and again for dinner. Scheduling things turned into a nightmare. I was actually overall pretty impressed with how well the nursing and doctor staff managed, considering how integrated everything is in the hospital under normal circumstances. They had the attached posted on their website:

[Attached: screenshot of the notice posted on the hospital's website]
 
FML today. Personally, I think this was intentional. In light of other current events, the timing of this reeks. They were immediately saying NOT A CYBER ATTACK, yet it just happened to be more effective than any other cyber attack in history. I'm not buying that CrowdStrike did zero testing prior to deploying the update. I'd be looking real close at anyone involved. #insidejob
Nah, if it was intentional they would've let it deploy to every machine before triggering the bug. Now THAT would've been an order of magnitude worse.
The US was largely spared because the rest of the world noticed before you lot even woke up.

CrowdStrike, and similar companies, are way worse than you think. Any start-up is dragging around lots of corpses from fast initial development. And then when they become huge, it costs too much to fix all those systems that "work just fine".

Now, HOW exactly they managed to not see this in testing first, no clue. Other than they pushed out the update without testing (wouldn't be surprised), or they never rebooted their test server after deploying the update (also likely).

I hope the rest of the world learns a lesson about the trust they place in companies like this.

As for Windows vs Linux vs Mac, in this case it was just as likely to fuck up on any of those platforms. CrowdStrike runs on all of those systems, with the same permissions, and probably the same shitty source code.

And cloud vs local. Ask all the IT guys with local servers running CrowdStrike how fun this was :flipoff2: It may have taken out a chunk of Azure, but that was resolved a damn sight quicker than all the companies still scrambling to fix their shit.
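
On the testing point above: nobody outside CrowdStrike knows what their pipeline actually looks like, but the standard safeguard for this kind of thing is a staged (canary) rollout, where the update hits a small slice of machines first and only keeps going if those machines survive a reboot and report back healthy. Here's a rough sketch of the idea in Python, with a made-up fleet and a fake health check standing in for real telemetry; it's an illustration of the concept, not anything CrowdStrike actually runs:

```python
import random
import time

# Everything here is hypothetical -- fake fleet, fake health check -- just to
# show the shape of a staged (canary) rollout gate.
FLEET = [f"host-{i:03d}" for i in range(100)]

def deploy_update(host: str) -> None:
    """Stand-in for pushing the new content/sensor update to one machine."""
    pass

def healthy_after_reboot(host: str) -> bool:
    """Stand-in health check: did the machine come back up and report in?
    A real gate would look at heartbeats, crash telemetry, boot loops, etc."""
    return random.random() > 0.01  # simulate a small background failure rate

def staged_rollout(fleet, stages=(0.01, 0.10, 1.00), max_failure_rate=0.02):
    """Push the update in waves and halt everything if a wave looks unhealthy."""
    done = 0
    for fraction in stages:
        target = int(len(fleet) * fraction)
        wave = fleet[done:target]
        for host in wave:
            deploy_update(host)
        time.sleep(0.1)  # stand-in for a real soak period (hours, not milliseconds)
        failures = sum(not healthy_after_reboot(h) for h in wave)
        if wave and failures / len(wave) > max_failure_rate:
            print(f"HALT: {failures}/{len(wave)} hosts unhealthy after this wave")
            return False
        done = target
        print(f"Wave OK -- {done}/{len(fleet)} hosts updated")
    return True

if __name__ == "__main__":
    staged_rollout(FLEET)
```

The whole point is that a bad file takes out 1% of machines instead of 100% of them. Whether they skipped this entirely or their gate just couldn't see a machine that never boots back up, no clue.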
 
Nah, if it was intentional they would've let it deploy to every machine before triggering the bug. Now THAT would've been an order of magnitude worse.
The US was largely spared because the rest of the world noticed before you lot even woke up.

CrowdStrike, and similar companies, are way worse than you think. Any start-up is dragging around lots of corpses from fast initial development. And then when they become huge, it costs too much to fix all those systems that "work just fine".

Now, HOW exactly they managed to not see this in testing first, no clue. Other than they pushed out the update without testing (wouldn't be surprised), or they never rebooted their test server after deploying the update (also likely).

I hope the rest of the world learns a lesson about the trust they place in companies like this.

As for Windows vs Linux vs Mac, in this case it was just as likely to fuck up on any of those platforms. CrowdStrike runs on all of those systems, with the same permissions, and probably the same shitty source code.

And cloud vs local. Ask all the IT guys with local servers running CrowdStrike how fun this was :flipoff2: It may have taken out a chunk of Azure, but that was resolved a damn sight quicker than all the companies still scrambling to fix their shit.

This is partially the result of a lot of mediocre programmers out there (remember the "everyone should learn to code" pushes), a desire for shiny new features over stable, well-tested code, and management always pushing the dev cycle quicker and quicker.

These outages are only going to get more frequent and worse.
 
I hope companies implement an analog backup program. They should have a system to continue operating without computers.

The plane will still fly, but you can't, because the computer isn't working. I know everyone was doing their best, but if the computer doesn't work, they are shut down, and cannot make a decision.

I'd like to see 1960s or older systems in place as a backup. Paper and pen.
 
This is partially the result of a lot of mediocre programmers out there (remember the "everyone should learn to code" pushes), a desire for shiny new features over stable, well-tested code, and management always pushing the dev cycle quicker and quicker.

These outages are only going to get more frequent and worse.
None of that helps, no.

Disagree. Testing and procedures for deploying changes are getting better faster than devs are getting worse. :laughing:
Hard disagree. I'm a developer, currently tech lead at a medical company. The new hires are all absolute retards, the DevOps team is a bunch of dumbasses, and the only thing the procedures do is make the fuckups take 10x longer :homer:

Devs graduating now can barely do the simplest things without ChatGPT and the like. It's scary.

And yes, the CrowdStrike shit took us out too. I still haven't shut down my laptop because that stupid file is sitting there, and I don't have the rights to delete it. Printed out my BitLocker key just in case, though.
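
For anyone else staring at a bricked box: the manual fix that was circulating (and, as I understand it, what the vendor's own advisory boiled down to) was to boot into Safe Mode or the recovery environment with your BitLocker recovery key handy, then delete the bad channel file out of the CrowdStrike driver folder and reboot. Here's roughly what that cleanup step looks like as a Python sketch, using the folder and file pattern that were reported publicly (%WINDIR%\System32\drivers\CrowdStrike, files matching C-00000291*.sys). It needs admin rights, it dry-runs by default, and you should trust the vendor's current guidance over a forum post:

```python
import os
from pathlib import Path

# Folder and filename pattern as reported in public guidance during the
# July 2024 incident -- double-check against the vendor's current advisory.
CROWDSTRIKE_DIR = (
    Path(os.environ.get("WINDIR", r"C:\Windows")) / "System32" / "drivers" / "CrowdStrike"
)
BAD_CHANNEL_FILE_PATTERN = "C-00000291*.sys"

def remove_bad_channel_files(dry_run: bool = True) -> None:
    """Find and (optionally) delete the faulty channel file(s).

    Meant to run from Safe Mode / WinRE with admin rights; a machine stuck in
    the boot loop never stays up long enough to do this normally.
    """
    if not CROWDSTRIKE_DIR.is_dir():
        print(f"Directory not found: {CROWDSTRIKE_DIR}")
        return
    matches = sorted(CROWDSTRIKE_DIR.glob(BAD_CHANNEL_FILE_PATTERN))
    if not matches:
        print("No matching channel files found -- nothing to do.")
        return
    for path in matches:
        if dry_run:
            print(f"Would delete: {path}")
        else:
            path.unlink()
            print(f"Deleted: {path}")

if __name__ == "__main__":
    remove_bad_channel_files(dry_run=True)  # flip to False to actually delete
```

Dry-run on purpose; deleting the wrong .sys file as admin is how one outage becomes two.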
 
Nah, if it was intentional they would've let it deploy to every machine before triggering the bug. Now THAT would've been an order of magnitude worse.
The US was largely spared because the rest of the world noticed before you lot even woke up.

CrowdStrike, and similar companies, are way worse than you think. Any start-up is dragging around lots of corpses from fast initial development. And then when they become huge, it costs too much to fix all those systems that "work just fine".

Now, HOW exactly they managed to not see this in testing first, no clue. Other than they pushed out the update without testing (wouldn't be surprised), or they never rebooted their test server after deploying the update (also likely).

I hope the rest of the world learns a lesson about the trust they place in companies like this.

As for Windows vs Linux vs Mac, in this case it was just as likely to fuck up on any of those platforms. CrowdStrike runs on all of those systems, with the same permissions, and probably the same shitty source code.

And cloud vs local. Ask all the IT guys with local servers running CrowdStrike how fun this was :flipoff2: It may have taken out a chunk of Azure, but that was resolved a damn sight quicker than all the companies still scrambling to fix their shit.

It depends on the purpose. What was happening while so many of these mission-critical systems were down? Impossible to know the target. If something nefarious was at play, I'm sure there would be a trail. It doesn't hurt to look at everyone with access and see who they might be connected to, along with financials. The list should be short.
 
I hope companies implement an analog backup program. They should have a system to continue operating without computers.

The plane will still fly, but you can't, because the computer isn't working. I know everyone was doing their best, but if the computer doesn't work, they are shut down, and cannot make a decision.

I'd like to see 1960s or older systems in place as a backup. Paper and pen.
I'll give you a peek behind the curtain at the airlines: they are all running 1960s-based computer programs :laughing:
 

Entire Microsoft Network Goes Down After Greg Removes USB Device Without Clicking 'Eject' First
TECH·Jul 19, 2024 · BabylonBee.com

REDMOND, WA — Microsoft servers across the globe crashed Friday after Microsoft employee Greg Wilson, a Principal Software Engineering Lead, removed a USB device without clicking "Eject" first.
"Everything is down. Planes are grounded and people can't get on the Internet," Microsoft CEO Satya Nadella confirmed. "And it's all because of Greg!"
Management first became aware of the problem after a report from security that noted a USB alarm had gone off in Sector 12, where the Windows team works hard adding bugs to the Windows OS. After removing a USB stick without first clicking "Eject," all the mainframe servers promptly crashed.
"No, YOU FOOL! You've doomed us all!" Greg's coworker had cried out, but it was too late.
According to sources, Bill Gates called HQ later in the day to ask why his Microsoft Zune had stopped working and was livid to learn the problem was because of Greg.
At publishing time, Greg was reprimanded by his manager and demoted down to the Xbox division. "Ugh, but that's where all the dummies go!" Greg said.
 
I heard the CEO of CS is the same doofus that was in charge of McAfee in 2010 when they broke almost every Windows computer.
Haven’t followed up on that yet though.
 
I heard the CEO of CS is the same doofus that was in charge of McAfee in 2010 when they broke almost every Windows computer.
Haven’t followed up on that yet though.
One of my friends was director-level at Cisco and went to a VM company (hint hint) that was subsumed by another company... I asked how the transition was going and he said, "Our new overlords DON'T BELIEVE in backups." One of their ideas was to move their server farms from 5 locations (spread out geographically) to one (Vegas), and how did they do it? Yep, they rented trucks, loaded up the old servers, and moved them to Vegas. They were "down" for a couple of weeks while the trucks rolled.

He told me that his boss told him to just enjoy the time during the buyout and transition and not worry about anything; they'd eventually have projects for him and his team... this is the world.
 