Information Technology Rant

Sandy Johnson

I don't usually like talking about my job, but this one is killing me. We handle public information every day, and we use software built by a company based locally here to manage that data. This company also builds software that handles election data (not related to anything we do here). They recently had a security breach and some of their systems were questionable, so we disabled accounts and changed passwords on anything they may have had access to. They have since (supposedly) cleaned up their issues, and I was working with one of their employees to get them logged into our TEST system to help set something up. That person couldn't get the password right, so to verify it, they emailed the password to me in plaintext. :mad3: So now I had to change it again and go through the whole process of getting them the password.

So yeah, no need for either party to do shady election stuff. There's plenty of incompetence flying around to let bad actors get to our data as it is. :rolleyes:
 
why did they call the government IT guys? why not a much better and much more qualified private IT firm to fix their mess ups?
 
I had a vendor e-mail me a password once. They were gone shortly after. I would make sure that whoever this idiot is gets reported to their boss. Someone that dumb does not need to be in IT.
 
why did they call the government IT guys? why not a much better and much more qualified private IT firm to fix their mess ups?

Ha ha, sweet burn :rolleyes: Except that wasn't what was going on.

They didn't call government IT guys. I had hit them up to get one of their people to use their in-house proprietary migration tool to move some data over from our production environment to our testing environment. They needed to be able to log in to our test servers to do so. From what I have heard, they were not able to solve their issue; they ended up paying the ransom instead.

I like how you assumed us government IT people must not be as qualified somehow. We're not the ones who got breached and we're not the ones who can't follow basic IT security protocol with administrative passwords even after they just had everything compromised. I'll take any one of my guys here over the people I have had to work with from that company any day.
 
I had a vendor e-mail me a password once. They were gone shortly after. I would make sure that whoever this idiot is gets reported to their boss. Someone that dumb does not need to be in IT.

They definitely got reported to our account manager over there. I don't like to wreck other people's jobs, but that was sketchy.
 
They definitely got reported to our account manager over there. I don't like to wreck other people's jobs, but that was sketchy.

I don't enjoy that either. But fuck, sometimes stupid needs to hurt.
 
Meh, emailing a test system password isn't the end of the world.

What bothers me more is you asking for production data to be moved to a test environment. You never use prod data in that way.
 
Meh, emailing a test system password isn't the end of the world.

What bothers me more is you asking for production data to be moved to a test environment. You never use prod data in that way.

A couple things-
1. We have two different types of test environments. One is internal, for our IT department to mess around with, break stuff, etc. The other is for our operations people to test data-driven things on. The mess-around environment doesn't have production data; the other has to, so our ops people can test changes they want to make against prod data.

2. This was a domain admin account password specific to them. We'd love to lock them down further, but the nature of their crappy software requires that they have full access to both prod and test environments. I'm not going into more detail than that because I don't actually want to talk about any vulnerabilities we are forced to accept due to their POS software (you can probably guess we didn't have a say in picking what we use).
 
Meh, emailing a test system password isn't the end of the world.

What bothers me more is you asking for production data to be moved to a test environment. You never use prod data in that way.

You say that. But I would bet you lunch that if they suck as bad as it sounds like they do, they will use the same PW for both environments. :laughing:
 
Meh, emailing a test system password isn't the end of the world.

What bothers me more is you asking for production data to be moved to a test environment. You never use prod data in that way.

Pulling prod data into test environments (where nobody loses money when shit breaks) to be replayed for troubleshooting is SOP in the finance world.
 
A couple things-
1. We have two different types of test environments. One is internal, for our IT department to mess around with, break stuff, etc. The other is for our operations people to test data-driven things on. The mess-around environment doesn't have production data; the other has to, so our ops people can test changes they want to make against prod data.

2. This was a domain admin account password specific to them. We'd love to lock them down further, but the nature of their crappy software requires that they have full access to both prod and test environments. I'm not going into more detail than that because I don't actually want to talk about any vulnerabilities we are forced to accept due to their POS software (you can probably guess we didn't have a say in picking what we use).

1. That still isn't how you do it; prod data stays in prod systems, period. That environment for ops should be treated as a production system if it contains production data. Even then, I'm not sure I'd be making a copy for ops people to mess around on; they should get simulated data. Data security and privacy are a big deal right now.

2. I can imagine. Really, nothing should require full domain admin to run (it could need it to install clustered services or something, I suppose) unless they're writing domain management software. It just sounds like someone being lazy about setting granular permissions for only the things they need.

I suggest you get an IT audit by a professional and then push for penalties against them for not fixing their security flaws.

(I'm a software solution architect who works with some serious shit)
 
Pulling prod data into test environments (where nobody loses money when shit breaks) to be replayed for troubleshooting is SOP in the finance world.

You recreate the data stream; you don't actually copy prod data, and you definitely anonymize everything that comes out of prod before it ever gets eyes on it. That's SOP.
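
For anyone following along at home, this is roughly what I mean, as a quick Python sketch. The field names, the salt handling, and the scrub rules are made up for illustration, not anybody's actual pipeline; the point is that identifiers get tokenized and PII gets dropped before a record ever leaves prod.

```python
# Minimal sketch of the "anonymize before it leaves prod" idea.
# Field names and salt handling are made up for illustration; a real
# pipeline would pull its config and secrets from somewhere managed.
import hashlib
import hmac

SALT = b"rotate-me-outside-of-source-control"  # hypothetical per-export secret

def pseudonymize(value: str) -> str:
    """Replace an identifying value with a stable, non-reversible token."""
    return hmac.new(SALT, value.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

def scrub_record(record: dict) -> dict:
    """Drop or tokenize PII, keep the fields that matter for testing."""
    return {
        "account": pseudonymize(record["account"]),  # stable token, joins still work
        "name": "REDACTED",
        "amount": record["amount"],                   # business data stays intact
        "timestamp": record["timestamp"],
    }

if __name__ == "__main__":
    prod_row = {"account": "ACCT-00123", "name": "Jane Doe",
                "amount": 41.17, "timestamp": "2020-05-19T10:32:00Z"}
    print(scrub_record(prod_row))
```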
 
You recreate the data stream; you don't actually copy prod data, and you definitely anonymize everything that comes out of prod before it ever gets eyes on it. That's SOP.

The stream is the data. I'm sure your clients will love it when some obscure buffer overflow you supposedly fixed rears its head in prod because you didn't test with the data stream that triggered it. Why don't you go fling poo with all the other Javascript monkeys. :flipoff2:
 
The stream is the data. I'm sure your clients will love it when some obscure buffer overflow you supposedly fixed rears its head in prod because you didn't test with the data stream that triggered it. Why don't you go fling poo with all the other Javascript monkeys. :flipoff2:

I'm well aware of event sourcing and how it works.

When you fix bugs, you recreate the problem first and then fix it... you don't need prod data to do that, just sayin'. I can't think of a single situation where you actually need the exact prod data to recreate the problem. It's sometimes more difficult and time-consuming, but at least in my world, data privacy and security trump the time it takes to recreate the data. Hell, for some of the systems we deploy, we don't get any access to the production environment once they're commissioned. Everything has to be provided as deployment packages, or as logs with any personal details stripped before being shipped back to us for troubleshooting. When you design a proper enterprise-level system with the right tooling, testing, etc., this isn't a challenge.
 
This is too common. I saw it last week on a harmless internal system, but it shouldn't happen.
 
I'm well aware of event sourcing and how it works.

When you fix bugs, you recreate the problem first and then fix it... you don't need prod data to do that, just sayin'. I can't think of a single situation where you actually need the exact prod data to recreate the problem. It's sometimes more difficult and time-consuming, but at least in my world, data privacy and security trump the time it takes to recreate the data. Hell, for some of the systems we deploy, we don't get any access to the production environment once they're commissioned. Everything has to be provided as deployment packages, or as logs with any personal details stripped before being shipped back to us for troubleshooting. When you design a proper enterprise-level system with the right tooling, testing, etc., this isn't a challenge.

Ok then genius, explain how you reproduce the issue without either replaying at least some prod data at the thing or wasting a ton of time. :shaking:

You sound like you're from the world of web bullshit. Fintech is different with different workflows.
 
IT rant:

Back in the 1990s, before all of you newbs knew about computers, I had certifications from Bay Networks, 3Com, and Novell. The IEEE didn't even respond to RFIs for years at a time.

At that time, I was Stewart from eTrade.

I did what I wanted, came in when I wanted to, and had control of everything. I wouldn't even think of giving myself all of the passwords now, but this was before any type of directory services (Active Directory, for you newbs), and basically you had to have the keys to all of it.

I could get away with anything because I worked on the weekends, wayyyyy late at night, all nighters were regular, etc. I made the fancy TV screen print out money.

I sat in meetings with Doctors, CEOs, Hospital Admins, and I had no college degree. I made a boatload of money, and I was everybody's hero.

Then the Professional Management Class got involved. These are do-nothings with MBAs who recognized how much of a fast track it would be inserting themselves between me and the people that IT served.

My work got reduced to the equivalent of maintenance at a factory. I was a smart guy who ran all the machines, but that just made Brent, the know-nothing moron CIO, resentful of me as a necessary evil.

Now IT sucks. It fucking sucks. It's the department that has the greatest intellectual gulf between the worker bees and Management. Managers are typically 105 IQ midwits who were smart enough to cheat their way through grad school hungover. So they hired stupider and stupider IT workers to granularize the power and authority and direct it all to them. Now if you want to image a server or reset a database, you have to get permission from 4 people who have nothing to do with those systems. The main part of the job is protecting the company from people saying the n-word on message boards or surfing porn. IT is now HR: it's a department to protect a company's ass.

IT is now a cost-center. The only thing that the Board or anyone in upper Mgmt sees is red ink next to IT.

And everyone is an expert. Some fucking 35 year old asshole who set up a home network now makes all kinds of requests about what the system should magically do. They will spend long minutes explaining to you how things should be and how you're doing it wrong. You have to humor these fags because they're the Users.

In 1994 I would just tell those people to fuck off in so many words, and if they got uppity I'd march into the nearest manager's office and soon that person would be reduced to proper obsequiousness.

For a while you could move farther and farther away from the bullshit by moving into infrastructure. But Cisco got on the MCSE 'everyone is certified! you get a cert! you get a cert! you get a cert!' game and now you have retards from High School coming in and telling you why you should use EIGRP instead of static routes. Now your job is de-educating some kids who got Cisco certs from a Votech program in high school. No fucking thanks.

Cisco is exactly like Microsoft, play by play. They market to managers and offer overpriced, low-performance, over-optioned products. Bay was better: it was faster, more reliable, and cheaper. But you had to buy into Cisco because they were the hot up-and-comer. I mean, yeah, by '98 or so the shit was fun and sexy, but did a 5-location manufacturing firm really need a $10,000 switch at the central office? Nahhh. But hey...

IT is all fucked up. Shitty industry.
 
as long as we have the nerds together ..... :flipoff2:

A few days after launching IBB, somebody PM'd me and offered to host this at their data center, and sent me their website. I remember what the website looked like, but for the life of me I can't remember the company name or the person that PM'd me.

Was that any of you?
 
Ok then genius, explain how you reproduce the issue without either replaying at least some prod data at the thing or wasting a ton of time. :shaking:

You sound like you're from the world of web bullshit. Fintech is different with different workflows.

You read the logs, understand the scenario that happened, recreate the data... not that hard. If the data is difficult to create manually, you write tools/scripts to do it for you. If you don't at least log the error and basic info, you're pooched and failed long ago. I am saddened that some financial institutions don't follow this practice. Prod data NEVER leaves prod systems in my world, and like I said, in a lot of cases it's nearly impossible to even get it.

Also, that's twice you've accused me of being a web/JavaScript punk. Without saying much, if you have flown commercially in the last decade, the shit we provide has impacted you.
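
Since we keep going around on this, here's the kind of thing I mean as a rough Python sketch. The log format, the field names, and the process_payment() stand-in are all hypothetical; the idea is rebuilding the shape of the failing input from the error log instead of hauling prod data into a lower environment.

```python
# Rough sketch of the "read the logs, recreate the data" workflow.
# Log format, field names, and process_payment() are hypothetical stand-ins;
# the point is generating synthetic input that matches the logged failure.
import re

LOG_LINE = "2020-05-19 10:32:00 ERROR payment rejected: amount=-41.17 currency=XTS account_len=64"

def scenario_from_log(line: str) -> dict:
    """Pull the shape of the failing input out of the error log."""
    fields = dict(re.findall(r"(\w+)=([^\s]+)", line))
    return {
        "amount": float(fields["amount"]),
        "currency": fields["currency"],
        "account": "A" * int(fields["account_len"]),  # synthetic, same length as the real one
    }

def process_payment(payment: dict) -> str:
    """Stand-in for the code under test."""
    if payment["amount"] <= 0:
        raise ValueError("amount must be positive")
    return "ok"

if __name__ == "__main__":
    repro = scenario_from_log(LOG_LINE)
    try:
        process_payment(repro)
    except ValueError as exc:
        print(f"reproduced: {exc}")  # now fix it and turn this into a regression test
```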

as long as we have the nerds together ..... :flipoff2:

A few days after launching IBB, somebody PM'd me and offered to host this at their data center, and sent me their website. I remember what the website looked like, but for the life of me I can't remember the company name or the person that PM'd me.

Was that any of you?

Not me, lol, although I have a contact for that small-time hosting stuff if you need one.
 
as long as we have the nerds together ..... :flipoff2:

A few days after launching IBB, somebody PM'd me and offered to host this at their data center, and sent me their website. I remember what the website looked like, but for the life of me I can't remember the company name or the person that PM'd me.

Was that any of you?

Wasn't me. Though if I could justify pulling strings to get us hosted in the facilities I deal with, I totally would. "Colocated with NYSE" has a hell of a ring to it. :laughing:

You read the logs, understand the scenario that happened, recreate the data... not that hard. If the data is difficult to create manually, you write tools/scripts to do it for you. If you don't at least log the error and basic info, you're pooched and failed long ago. I am saddened that some financial institutions don't follow this practice. Prod data NEVER leaves prod systems in my world, and like I said, in a lot of cases it's nearly impossible to even get it.

You just don't get it. You don't have heavy logging in production because logging would slow things down too much. This is how basically all trading systems are: you run a low level of logging in prod for speed and then replay in a lesser environment if you need to. This is SOP. You can read the specs and write tools that send whatever you want. That doesn't mean the real live data doesn't contain special magic that violates the spec but works, and occasionally breaks intermediary systems. These deviations are sometimes the special sauce that makes people money, and you will lose customers if you rigidly enforce specs. You can try to test for that, but you have no guaranteed way of doing it other than using real-life data that came down the pipe.

I also don't think you have an appreciation for what the access control is like on the QA environments. These are not servers sitting in the closet behind the IT department. They're basically prod but in a different data center and with different processes for software upgrades.

Your "my way is the only right way" bullshit is why Commiefornian techies are so consistently able to shit up other states.

Also, that's twice you've accused me of being a web/JavaScript punk.

Well you're acting like one.

Without saying much, if you have flown commercially in the last decade, the shit we provide has impacted you.

My employer's hardware and software is on the critical path of a double digit percentage of the world's stock market activity.
 
as long as we have the nerds together ..... :flipoff2:

A few days after launching IBB, somebody PM'd me and offered to host this at their data center, and sent me their website. I remember what the website looked like, but for the life of me I can't remember the company name or the person that PM'd me.

Was that any of you?

Maybe 98blacktj or something like that? Sorry, I remember some kind of year/color/jeep username from the old board doing hosting, but I can't remember the name.
 
You just don't get it. You don't have heavy logging in production because logging would slow things down too much. This is how basically all trading systems are: you run a low level of logging in prod for speed and then replay in a lesser environment if you need to. This is SOP. You can read the specs and write tools that send whatever you want. That doesn't mean the real live data doesn't contain special magic that violates the spec but works, and occasionally breaks intermediary systems. These deviations are sometimes the special sauce that makes people money, and you will lose customers if you rigidly enforce specs. You can try to test for that, but you have no guaranteed way of doing it other than using real-life data that came down the pipe.

I also don't think you have an appreciation for what the access control is like on the QA environments. These are not servers sitting in the closet behind the IT department. They're basically prod but in a different data center and with different processes for software upgrades.

Your "my way is the only right way" bullshit is why Commiefornian techies are so consistently able to shit up other states.

My employer's hardware and software is on the critical path of a double digit percentage of the world's stock market activity.


Certainly there are some performance impacts from logging, but it's a necessity; at the very least, errors should be logged. I'll admit our systems are typically only servicing tens of millions of transactions per day each, so low volume compared to other systems, but with distributed architectures it's certainly possible to handle the volume with high performance. You think Google doesn't log the billions of hits they get a day?

Also, it's pretty easy to automate testing on APIs to try every allowable (and not allowable) combination of inputs. Enforcing that traffic is to spec is important, and a lot of industries don't tolerate deviation from spec; I would have thought the financial world would be extremely rigid about it. We also test integrations in lower environments; we don't just let anyone connect to prod without going through proper testing. Bugs still happen, and when they do, we analyze, recreate, and then work them through the process.

I have an appreciation for QA environments; we have several levels (Developer, QA, UAT/SAT, Cert, Staging, Load Testing, Pen Testing, etc.) sitting below production, with geo-replicated DR sites alongside, each environment designed to fit its purpose. Some are literally identical to production; others are single boxes sitting in our on-prem IT rooms. Doesn't change the fact that we don't move data off the prod boxes.

There's no doubt you're working in a critical, high-performance industry and know your (you're? haha) shit, and listening to other industry professionals is what makes you better... I'm telling you, best practice is to keep prod data in prod. QA and prod systems should never interact, have dependencies, or share anything. Feel free to search something like "Should you use production data in testing" or whatever and see what the consensus is. At the very least, I hope the data you bring down to those lower environments is anonymized prior to transport off the source system.
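
To put something concrete behind the "every allowable (and not allowable) combination of inputs" bit above, here's a minimal Python sketch. validate_order() and the value lists are hypothetical stand-ins, not any real system; a real setup would drive the same loop against the actual endpoint in a lower environment.

```python
# Quick sketch of hammering an API's input validation with every combination
# of allowed and disallowed values. validate_order() is a hypothetical
# stand-in for whatever endpoint or message handler you're actually testing.
from itertools import product

AMOUNTS = [0.01, 1000000.0, 0, -5, "NaN"]   # mix of legal and illegal values
CURRENCIES = ["USD", "EUR", "XXX", "", None]
SIDES = ["BUY", "SELL", "HOLD?", None]

def validate_order(amount, currency, side) -> bool:
    """Hypothetical spec: positive numeric amount, known currency, BUY/SELL."""
    try:
        ok_amount = float(amount) > 0
    except (TypeError, ValueError):
        return False
    return ok_amount and currency in {"USD", "EUR"} and side in {"BUY", "SELL"}

if __name__ == "__main__":
    accepted = rejected = 0
    for amount, currency, side in product(AMOUNTS, CURRENCIES, SIDES):
        if validate_order(amount, currency, side):
            accepted += 1
        else:
            rejected += 1
    print(f"{accepted} accepted, {rejected} rejected of {accepted + rejected} combinations")
```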
 
Ha ha, sweet burn :rolleyes: Except that wasn't what was going on.

They didn't call government IT guys. I had hit them up to get one of their people to use their in-house proprietary migration tool to move some data over from our production environment to our testing environment. They needed to be able to log in to our test servers to do so. From what I have heard, they were not able to solve their issue; they ended up paying the ransom instead.

I like how you assumed us government IT people must not be as qualified somehow. We're not the ones who got breached and we're not the ones who can't follow basic IT security protocol with administrative passwords even after they just had everything compromised. I'll take any one of my guys here over the people I have had to work with from that company any day.

Govt IT people are morons. We have several people in our office who routinely have to explain to the "head" IT guys how to fix the issue. They would do it themselves, but we don't have admin rights. Fucking retards. We've already had two of them dismissed because they were so incompetent. The one we have now is half decent, but of course he's a contractor.
 
Certainly there are some performance impacts from logging, but it's a necessity; at the very least, errors should be logged. I'll admit our systems are typically only servicing tens of millions of transactions per day each, so low volume compared to other systems, but with distributed architectures it's certainly possible to handle the volume with high performance. You think Google doesn't log the billions of hits they get a day?

Logging at a high level would make things too slow to be economically viable.

Adding more servers to distribute the load, so we could log at a high level without increasing latency, would be non-viable on account of the hosting bill in the kinds of data centers our devices need to be in.
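
For the non-fintech folks, here's the tradeoff we're talking about, as a tiny Python sketch using the standard logging module. The message fields and the loop are illustrative only; the point is that with lazy %-style formatting and a low prod log level, a suppressed debug line costs little more than a level check. Real low-latency systems go well beyond this, but it shows the idea.

```python
# Tiny sketch of the "low level of logs in prod for speed" idea using the
# standard logging module. Message fields and the loop are illustrative only.
import logging
import time

log = logging.getLogger("gateway")
logging.basicConfig(level=logging.WARNING)   # prod: warnings and errors only

def handle_message(seq: int) -> None:
    # Lazy %-style formatting: when DEBUG is disabled the string is never built,
    # so the per-message cost is roughly one level check.
    log.debug("seq=%d side=%s px=%f", seq, "BUY", 101.25)

if __name__ == "__main__":
    start = time.perf_counter()
    for seq in range(1_000_000):
        handle_message(seq)
    print(f"1M messages with debug suppressed: {time.perf_counter() - start:.2f}s")
```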
 
Logging at a high level would make things too slow to be economically viable.

Adding more servers to distribute the load, so we could log at a high level without increasing latency, would be non-viable on account of the hosting bill in the kinds of data centers our devices need to be in.

Yup, so effectively you're taking shortcuts by testing with prod data and not having sufficient infrastructure :) Just use that as your excuse; any dev will understand there's never enough money to do everything right lol
 
I'll never pretend to be that smart, but why not just use a landline to give out a password?
 