The current state of IT security, it makes you WannaCry

Recovering from what the press called "the largest-ever cyber-attack", IT departments worldwide are discussing ways to defend against this in the future. In this process, many people are looking for immediate actions while, instead, they need to thoroughly revise their security concepts.

A few lines about myself: since the year 2000, I have been working as a system and network admin for an engineering company in the defense sector. We work on Windows, although a good part of the servers are Linux machines. Our security requirements are very high; the next exit on that road is paranoia.

For the last 15 years, I have done a lot of forum reading and helped out many people with mostly minor Windows problems on Experts Exchange and other administrator forums. This has granted me a lot of insight into how both admins and users think and act, and into their understanding of IT security.

I’d like to start the article with a question:

When was the last time you read something IT-security related that resulted in a significant change of your own behavior or even a change of your company security policy?

Wait, wasn’t there that big WannaCry thing just 3 weeks ago? Oh yes, you did something afterwards. You bought new AV software, spam filtering, and anti-ransomware software, all top-notch. Stop: is that what you would call a change of behavior, or even a change of policy?

The last time you significantly changed your security policies might be so far back, it's likely hard to remember. I am not saying that is a sad thing. But if you have a feeling that your policy needs to be reconsidered, you might need new ideas and input and this article could be worth reading for you.

So, fasten your seat-belts, let’s go.

What does your secure IT look like? What does it consist of, what does it take?

You have devices to secure, users to teach, hundreds of software-related things to understand, set up, and configure, and you may even be responsible for physical security while needing to be prepared for disaster recovery. You – maybe a small team, or even working alone – have to design and at the same time fulfill a security policy. You, the security superheroes, always on the bleeding edge of technology, one step ahead of the bad guys – yeah, right.

The truth looks more like this: You follow orders. Someone wants software, he chooses it – you buy it, you install it, you configure it in a way you hope is secure. Someone wants hardware, he chooses it, you buy it, set it up, and hope that it is secure.

Your CEO is crazy about iPhones, he says everyone at management level needs one – you nod, you buy, you configure, despite your desire not to mess with Apple devices in your network. Your CEO is calling from abroad, he needs the administrator password immediately (he wants to install a printer) – you certainly give it to him on the phone and he will keep it from then on, because you don’t dare to take it away again. Someone loses his laptop – your head of IT says he told you last year that the company needed encryption. You convince him that this project is on its way, but cannot be realized before next year because too many other things are more important at the moment and your colleague has been ill for a month or so…

Then your security officer (himself working in a different, non-IT department) enters the room and tells you that he needs your current security policy printed out by next week for an audit. Wait… what policy? Isn’t he the one who should have it? Ah, right, there was a document that your predecessor had prepared in 2006 and it looked quite good, if you remember correctly, maybe you can make it by next week and edit it a little here and there so that it reflects the current state. Wait… what is the current state? State of what?

To use the president’s language: “it’s a disaster”. After what I've read and experienced over the years, I am comfortable saying that small and mid-size companies often have no concept of security at all. They don’t have a policy, and they only react to threats when they become aware of them. They know better, but don’t have the guts to tell management that this will sooner or later have grave consequences.

Check yourself:

  • are fundamental IT decisions made by non-IT people?
  • are all IT decisions evaluated with security in mind?
  • were there ever things desired but not done because experts (you?) said they were insecure to do?
  • would you say most IT processes are transparent enough to be documented for non-IT people so that they can evaluate the security consequences?
  • are key processes so complex that only a few (only you?) are able to understand and maintain them?

Ok, back to the start, the motivation of this article: in May 2017, there was a large “cyber-attack” called “WannaCry” – it was rather an exploit that led to an uncontrolled outbreak of a network worm, but the media preferred to call it “an attack”. Some people have called it the biggest ever, and the news covered it extensively. We have seen hospitals in emergency states because of it. Surely, there were people losing large amounts of money or their jobs because of it, and very possibly there were even people dying because of it. Because of WannaCry, a huge amount of security software has been sold, a lot of backup software has been sold, forums have been stormed, and some governments have even declared it a duty for institutions like hospitals, banks, telecommunication giants, water suppliers, administrations, and so on to notify the government of any cyber-attack they become aware of.

Can you imagine that? We don’t live in the Stone Age… for decades, those institutions relied on IT infrastructure and they had no obligation to even tell the government or police that they had been attacked. Am I the only one who thinks we’ll see the day when you turn the water faucet handle and nothing comes out, because the IT of the water supplier had been hacked?

We depend on IT. All ideas, all numbers, all information live in there. If we do not dare to make IT security one of the main objectives at work, we risk having our ideas and know-how stolen, manipulated, or simply deleted.

Whole libraries can be transferred via the internet in less than a day. And no one – NO ONE – can call themselves safe. The CIA and NSA are not only home to large hacking departments – they had to admit that they had been hacked themselves. Even companies like Hacking Team, which specialize in developing software that can infiltrate other networks and stealthily snoop data, have been hacked!
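That “whole libraries in a day” claim is easy to sanity-check with a back-of-the-envelope calculation. The 1 Gbit/s line speed below is my own assumption for illustration, not a figure from the article:

```python
# Back-of-the-envelope: how much data fits through a single 1 Gbit/s
# uplink in one day? (Assumed line speed; adjust to your own uplink.)
bits_per_second = 1_000_000_000
seconds_per_day = 24 * 60 * 60
terabytes_per_day = bits_per_second * seconds_per_day / 8 / 1e12
print(f"{terabytes_per_day:.1f} TB per day")  # 10.8 TB per day
```

Roughly ten terabytes through one ordinary business uplink, every day – on the order of the textual content of a very large library, gone before the next morning's coffee.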

How can that be? Isn’t that a little like catching a traffic warden driving drunk? Shouldn’t they know better and be prepared? If not them, then who?

A core part of the nature of data communication is its invisibility. Your data is being read, written, and transmitted all the time. Turn on your computer, open notepad or vi, type “aha”, and save the document. How large is that document? 3 bytes. Can you believe it? 3 bytes. Now open your full-blown word processor, type "aha", save it, and check the size – thousands of bytes. What did Word do? Do you feel able to explain what is inside that document? (Sure, someone will yell, "I can!" Good for you.)
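The plain-text half of that experiment can be reproduced in a few lines of Python (a minimal sketch; the file name and temp directory are arbitrary):

```python
import os
import tempfile

# Write the same three characters as plain text and inspect the size on disk.
with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "aha.txt")
    with open(path, "w", encoding="ascii") as f:
        f.write("aha")
    size = os.path.getsize(path)

print(size)  # 3 – one byte per character, nothing else
```

Save the same word from a modern word processor and the file balloons into a zipped container of XML, styles, and metadata – content you did not type and probably cannot fully account for.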

We are drowning in data wrapping up the vital data.

Meanwhile, while you were wondering what happened on your computer, did you notice that the little LED of your hard drive was constantly signaling activity, and that the NIC and network switch were blinking constantly? Could it be that the bad guys are already among us at work, right now indexing your files, picking out the fleshy parts of information and transmitting them to God-knows-where? Did you ever care to look at process-monitoring tools for a while and lose yourself in the millions of actions that happen in the background each hour of the day? Did you ask yourself whether monitoring even works correctly on infected machines?

Did you ever question what is inside all these encrypted data packages that try to leave your network every second of the day? Who is receiving them? And who will they be forwarded to afterwards – an action that is beyond your control?

We have no way to answer all this. We have to trust that what we think we can measure and monitor is correct and that we are somewhat the master of our digital friends. Security is about trusting. Define how things should run, establish controlled processes, and audit them. That’s the best you can do, right?

Well, no. The best you can do is keep secret things out of the reach of IT.

Huh? What did I just write? Shut down and go back to the Stone Age, as in typewriters and paper? No. But rather make yourself aware that no matter how good you are, they will get you someday. Don’t let your behavior rely on promises that some nerds cannot keep. Just because some piece of IT suggests that you are somewhat patched/secure/invulnerable against WannaCry does not mean the next vulnerability is not in use already. How long did the NSA keep the exploit behind WannaCry to themselves while no one even knew it existed? Three years. Some other exploits had been active for decades.

Does anybody still believe in patching? Ever thought about this: your Windows computer has an identifier that is transmitted when it scans for applicable patches. This identifier can – no big conspiracy needed – possibly be connected to your name. There are many ways to make that happen. Now this patching process goes something like:
“Hey update server, here’s Mr. X’s computer asking for patches, what should we do?”

“Wait a minute. OK, Mr. X is an asset of agency Y, so let’s give him patch 2362372 in the ‘z’ variant.”

“Oh, the z variant, isn’t that the one that can harvest all the indexed data of the machine?”

“Sure, that’s the one. But it can do more; it can also hide the harvesting process from monitoring and cover its traces.”

No, I am not saying that Microsoft or whatever company is forced to cooperate with whatever government. No, of course not, how dare you think that? They are also top secure; never would we have to fear that their update servers could be hacked and misused by a third party, no. Those servers are operated and guarded only by the best guys, who are so smart they would never confuse their test systems with the worldwide-accessible “big boss” update servers.

Enough. It wasn’t my intention to tell you that your OS is somewhat undermined out of the box, sorry. It was rather a way of expressing my frustration about the firm belief of so many in the need for networks with internet connectivity at all costs (not to mention the cloud). There are really only a few people I read about who try to limit which systems can connect to the internet, although that is surely the most important piece of the security puzzle. Because that is what it’s all about: controlled connectivity. Controlled data flow. Controlling what applications run, and being able to oversee what those applications do with the data and where they transmit it.

Surely, no one doubts that we need awareness training, authentication, encryption, patching, trustworthy employees, reliable admins, and people auditing these reliable admins from time to time. But the main thing is: do we know what’s cooking? Or are we already lost in this data swamp, trusting applications and OSes as if time hasn’t taught us not to?

What would it take to return to a time when we could control what data is being transmitted, simply because the amount was so small that people were actually able to say, “Today, 250 MB left the company over the internet, and that traffic was authorized after manual review before transmission – the list of transferred files is as follows...”? Ridiculous. That time will not come back – or will it?

“KISS – Keep it simple, stupid” is one of the mottoes of the Unix community. Can we succeed in “keeping things simple”? Can we do it while still being high-tech and competitive with others? We won’t be able to keep pace with others if we keep it simple, will we?

At least security concepts will benefit if we keep them as simple as possible and if we sometimes dare to say “no” to these comfortable yet complicated new technologies. Things have gotten way out of reach, way too complex, to call them controlled. If IT fails to convince the stakeholders that the desire to make anything possible – to enable anyone to work from anywhere at any time – is a big risk, then this will strike back. IT needs to learn to act responsibly, not to be devoted to productivity alone.

To mention that WannaCry thing one last time: did you know that some weeks before the first waves of ransomware broke out, a different piece of malware was already actively using the same vulnerability to spread? It did not demand a ransom but, instead, silently enslaved the victims’ CPUs to mine cryptocurrency. You can read about it here, but actually, this is nothing new. That type of crime had already been seen many years before.

The smart bad guys won’t tell you that they have you, no.


Comments (5)

Andrew Leniart, IT Professional, Freelance Journalist, Certified Editor
Author of the Year 2019
Distinguished Expert 2020

Hi McKnife. An interesting read to be sure, but I fail to comprehend what the article is trying to promote here, apart from mass paranoia and the idea that no matter what steps or measures are taken, the situation is hopeless. I read through it twice to be sure.

No one could ever argue that it's impossible to be certain an unknown compromise hasn't occurred in any given circumstance, so what possible measures can one take to protect against the unknown?  You raise a lot of interesting questions in your article, but seem to offer very few suggestions or solutions.

Was this just to share your views and opinions of how seemingly hopeless the risks with today's available technologies are? So what's the solution? Shutting it all down and going back to filing cabinets, pen, paper, and snail mail – all of which have their own inherent risks that were also exploited in years past?

I also think that it's not quite fair to point blame on IT administrators' shoulders for consequences caused by an employer's insistence on a technology implementation. These guys more often than not work with tied hands. Other than strongly expressing any concerns they might have and warning employers, CEOs, and so forth that a decision to implement is not in their best interests, at the end of the day they have to fulfill the requests as safely as they're able, or face being replaced by someone else who will.

Anyway.. interesting read as I said. Thanks for sharing.
Scott Fell, Developer & Coffee Roaster
Most Valuable Expert 2013

The truth looks more like this:  

100% on point!  

Often it is hard to explain to people why we choose something, and they look at you cross-eyed. Articles like this are a good resource. Thanks for the insight and ammo to share.
McKnife (Author)
Distinguished Expert 2019


Andrew, thanks for the feedback. I am aware that this article mainly raises questions while not answering many.
Maybe it's rather a starting point for discussions than a collection of solutions.

You ask "So what's the solution? Shutting it all down..." which is the same that I ask in the article and I answer with "no" immediately afterwards.
You write "it's not quite fair to point blame on IT administrator's shoulders ...These guys more often than not work with tied hands" - that's exactly what I am saying. If the admin is not comfortable making his concerns heard, then he is not employed at the right place and should not fear to be replaced but leave on his own.

Before you start discussing - let's wait for other comments.
McKnife (Author)
Distinguished Expert 2019


Some news that might be of interest for Americans:
In short: US politicians paid for analysing voter opinions on US election-critical topics. Voter data (1.1 TB!) on 198 million Americans was uploaded to an Amazon server, but the access rights were incorrectly set – it was open to the public and the data was not encrypted. It leaked.
See what I am talking about?
Andrew LeniartIT Professional, Freelance Journalist, Certified Editor
Author of the Year 2019
Distinguished Expert 2020

Before you start discussing - let's wait for other comments.
I guess I've waited long enough now. I just read through this again and still think the world you appear to crave just doesn't exist, nor will it ever. Ideals are one thing, reality another.
are fundamental IT decisions made by non-IT people?
Absolutely, and that would be true for the vast majority of the population around the world. The guy with the fattest wallet (employer) has ultimate control and always will. Stamping feet won't change that fact, and digging in heels will only get you sacked.

All we can do is try our best to educate, and lead by example. Insisting to the point of unemployment isn't a solution. For every IT admin that won't do something management insists on, there will be 20 or more waiting in line to take his place that will. Sad, yet true.
If the admin is not comfortable making his concerns heard, then he is not employed at the right place and should not fear to be replaced but leave on his own.
Raising his concerns is one thing and quite easy to do. Suggesting he should be ready to throw in the towel on what could be his only means of putting bread and butter on the table when his concerns are dismissed by an unsympathetic (and perhaps ignorant?) employer, in order to *force* his concerns to be heard is quite another, and ultimately, an unrealistic and idealistic hope.

Would I be accurate in assuming you've never worked for anyone who would not adopt every security recommendation you made? That you've flat out refused to perform IT-related tasks that were against your own best security practice ideals? If so, I'd be quite surprised, because I credited the head honchos in the defence sector with bigger balls than that :)

