Chapter 2: There’s no one way to do this

Culture dictates action. When you know which vulnerabilities are the easiest to penetrate and attack, you know where you can make the greatest impact. My first assignment in Saudi Arabia opened my eyes to how local cultural differences affect security and response procedures in different parts of the world. The attackers struck during Ramadan, a month-long sacred holiday, when religious customs are even more highly respected. What better time to have maximum impact?

We originally investigated from Houston, but a few days into it, we realized that the attackers might be on to us. Some 30,000 to 40,000 computers had their MBRs (Master Boot Records) wiped, which meant they could not be booted up. Where there’s little forensic evidence, it’s harder to determine what happened in the environment. So, what we thought we could resolve remotely turned into something much bigger. Although Saudi embassies around the world were closed for the holiday, the country opened them especially for my team; we each drove or were escorted to our nearest embassy, had our passports stamped, and boarded a plane into the country. For one team member, they even sent a private jet to collect their passport from home. That was also a first!

It was an eye opener for me. After dealing with the geopolitics of traveling there and seeing the tension on the ground, we got to work. We quickly realized the scale was greater than anything we’d ever experienced. Normally when we’re investigating, the company might be operating at 50 percent by the time we go in, and there’s some impact to their business operations. We’re also usually able to create a boundary around the issue to contain it, so we can repair whatever has gone wrong in the environment. But in this case, the majority of the environment was destroyed, and the damage was so widespread that there was little to contain.

We had to ensure that no other issues were left in the environment. Such a demanding and urgent recovery meant zero downtime in the client facility. We did nothing but work and sleep. The whole country was on holiday except us. It might sound negative, but it wasn’t. In the end, the Saudis were great hosts, especially when they realized we were there to work and nothing more. We worked round the clock with them to help them recover and left them our hard drives on our way out. This is standard practice; nothing leaves the client environment or the country.

Most engagements I’ve worked on cross multiple time zones and geographical boundaries. It’s necessary to be sensitive to, and aware of, the different cultural, social, organizational, and individual norms and behaviors that can impact the success of a project. And if you’re earnest and want to succeed in cybersecurity, I recommend having a sense of humor and an open mind, in spades.


The testing side – understanding hackers

No two journeys into cybersecurity are alike. Similar to my road to Saudi Arabia, mine wasn’t a linear one; very few people wake up one morning and decide they’re going into the field. It’s also not a path that’s easy to choose, and it’s so obscure that most university students don’t even know about it until they’re in a computer science class and find it’s an option.

Most professionals who started around the same time as I did probably experienced a unique journey that led them to where they are today. I started in the industry during the dotcom boom, when the internet was in its nascence. There was no career path to cybersecurity then; I simply stumbled into the field. At that time, I was working in the IT audit/security department of one of the big four accounting firms. Asking the same questions every day—policy, operations, risk management—had me questioning what I was really doing there. I was bored. I needed to move—and fast.

I have been interested in computers from a young age. I always liked hacking and breaking into things, so I suppose I have a somewhat deviant mind. Things were quickly taking off online—businesses were going live, websites were cropping up everywhere, email was becoming a mode of communication for the masses—so I couldn’t wait. “Online is the way it’s going,” I told my employer as I tried to persuade him to fund a master’s course. It worked; they sponsored my two-year part-time Master of Science in Information Security at Royal Holloway, University of London, and this is where I received my formal cybersecurity education.

After graduation, I moved to ISS (Internet Security Systems), a very specialist information security company in Atlanta, and worked on their X-Force team as a consultant. (Back then, accounting firms hadn’t yet fully grasped the implications of cybersecurity, nor understood the skillsets one needed to work in security, so going to a specialist company was the best thing I could do to advance my career.) At ISS I honed my technical skills: ethically breaking into systems to test a company’s security posture. This is also where I got my hands dirty and my feet wet performing vulnerability assessments and penetration testing.

Next, I realized that I needed to manage security for an organization if I was ever going to become a well-rounded consultant, matching technical skills with strategic management ones. I left ISS and went on to further sharpen my technical skills at IBM, where they allowed me to run their ethical hacking team in Europe and to be responsible for the security of their larger client organizations as the CISO (Chief Information Security Officer).

Companies, however, consistently paid little more than lip service to security, and I learned, in general, that the CISO always became the patsy who got blamed, or worse, fired, for everything. Yet this experience gave me both insight and credibility on the management side of running security for large companies. It was now time to shift my focus again.

Around this time, I started researching and writing technical white papers and publishing them for the cybersecurity community. I also trained and taught aspects of cybersecurity through Royal Holloway and industry associations, and I started speaking at conferences and to various industry groups. I found through these speaking engagements that attendees wanted to know more about cybersecurity trends and prevention, and what’s really going on. It was shocking at times to learn how naïve the average individual is. But then again, my normal isn’t their normal, and my every day isn’t theirs. Junior technical attendees and experienced managers from other fields would come up and ask me questions about how to enter the industry, which is how I came up with the idea for this book.

With breach incidents on the rise, I left IBM for Foundstone, which meant working for George Kurtz and Stu McClure, two internationally recognized cybersecurity experts. Foundstone was famous for many things, including the Hacking Exposed book series. Now owned by McAfee, Foundstone was one of those “Holy Grail” companies any aspiring cybersecurity specialist wanted to work for because of its focus on technical excellence.

At Foundstone, I was quickly promoted to run the professional services division. Being a part of that team made me proud because it allowed me to fulfill a dream of working with people of the highest technical caliber. It was like going from the regular army to a special operations team, and I was happy to work with the technical leaders in the space.

In the early days, internet security was more of an art than a science: we were testing to see what might happen while hoping nothing would, because we genuinely didn’t know. Twenty years ago, when everybody was building e-commerce systems, very little testing was done. People were just throwing together e-commerce websites and applications so they could be up and running as fast as possible. Security was an afterthought, so customers would transact without it, and because no one knew about it except hackers, no formal penetration testing was done.

Since then, we’ve done a significant amount of testing in various environments and with different tools. The field has become more of a science because we now know what to expect out of certain tests and protocols.

At Foundstone, we hacked into systems to test their security, and we also helped companies perform incident response and forensics, which means helping them understand how breaches happened in the first place. During that time, I became increasingly interested in forensics. While I had started in offensive mode—penetration testing—I realized that companies really didn’t know what they were doing. They knew breaches were an issue, but they still had their heads in the sand regarding cybersecurity because they typically only took care of the minimum requirements. I became frustrated at telling clients how to fix things, only to have them implement some, or only half, of what I’d recommended. I’d come back later to do the next test, and they would say that they hadn’t done any security upgrades. In between the two tests, they would have added new website functionality, which only increased their vulnerability. “Secure by design” just wasn’t a thought back then.

The general public also didn’t understand the security implications of emailing, buying online, or sharing their personal experiences on social media—all of which can result in personal information being compromised. Online buyers believed everything was secure by default, while companies cared more about functionality and the aesthetic appeal of their websites and applications than about making them secure—they didn’t understand security. It was more about luring customers and making revenue than ensuring the customer’s online activity was secure.

Things have obviously evolved since I started in the industry, but it’s still irritating to see people and companies compromised for the stupidest things. For most companies, the marketing ethos is: get it done fast and make it look sexy. The push for rapid launches only exacerbates other mistakes, making those websites even more attractive to attackers.


Mistakes happen

Although I cut my teeth on penetration testing, I (along with most colleagues) made mistakes in the process. I remember one of my teams was testing security for a company that built and launched satellites—a very expensive product—and had a test launch scheduled for a weekend. During our testing, we caused an issue that forced them to abort the launch. It was costly for that company, but a learning experience for us.

On another one of my early assignments, my team and I were asked to test a new financial system that transacted approximately $1.5 trillion a day. Because we had to test in a live environment, the client was paranoid about downtime. As part of the hack, we managed to enumerate a list of all the users on the system, and we then ran a dictionary attack against that list to determine whether any of them had weak or common passwords. As luck would have it, the whole system went down in what we call a DoS (denial of service) attack. I had locked everyone out of their accounts because the client had implemented account lockouts after three failed attempts.

There I was in the server room, typing away, oblivious to it all.

Why is everybody running around like headless chickens? I wondered, when all the while, I was the culprit. In my defense, I had previously asked the client lead if they had implemented such security measures, and he had said no; if they’d told me the truth, I wouldn’t have tested this abuse case. In the end, that didn’t matter—I had just stopped a system from transacting for about two and a half hours. The client was pissed. It took thirty minutes to determine that it was actually an issue, another sixty to triage it, and then over an hour to give everybody new accounts and reset their passwords and systems. It’s not like switching a light on and off. I had proven up front the risk that the system was insecure and vulnerable to a DoS attack. But even scarier—I don’t think the client saw it that way, because he wasn’t prepared for the real results.
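
To make the mechanics of that mishap concrete, here is a minimal sketch in Python of how an ordinary dictionary attack collides with a three-strike lockout policy. Every user name, password, and threshold below is hypothetical, and the snippet only simulates the logic; it is an illustration of the failure mode, not a testing tool.

    LOCKOUT_THRESHOLD = 3  # assumed policy: lock an account after three failed attempts

    # Hypothetical credential store; none of these passwords appears in the wordlist.
    credentials = {"alice": "S7!rq9x", "bob": "m4#Lp2w", "carol": "Zt8$ku1"}
    failed_attempts = {user: 0 for user in credentials}
    locked_out = set()

    wordlist = ["password", "123456", "letmein"]  # tiny stand-in dictionary

    def try_login(user, guess):
        """Return True on success; otherwise record the failure and lock the
        account once the threshold is reached."""
        if user in locked_out:
            return False
        if credentials[user] == guess:
            return True
        failed_attempts[user] += 1
        if failed_attempts[user] >= LOCKOUT_THRESHOLD:
            locked_out.add(user)
        return False

    # The "attack": walk the enumerated user list and try each dictionary word.
    for user in credentials:
        for guess in wordlist:
            if try_login(user, guess):
                print(f"weak password found for {user}")
                break

    # Three wrong guesses per user is enough to trip the policy for everyone.
    print(sorted(locked_out))  # prints ['alice', 'bob', 'carol']

None of the guesses succeeds, yet every account ends up locked, which is exactly the accidental denial of service described above. The practical lesson carries over directly: confirm the lockout policy for yourself, and throttle or stagger guesses, before pointing a wordlist at a live system.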

Note that most penetration tests do not happen in a live environment. If you don’t stay cool in these testing situations, things that shouldn’t happen, or that you least expect, actually might. Yet without that test, attackers can easily cause significant havoc that can later take even more significant resources to resolve. And when you do make a mistake, and proper quality assurance protocols have been followed, that’s “okay” as long as you learn from it and don’t make that same mistake twice.

The breach side – understanding clients

After breaking into multiple systems and seeing how easy it is to do so, you start to see certain patterns when you’re defending. Using what I learned from penetration testing, I could see how I might break into a company or even a government entity to help cross-correlate the forensic evidence attackers left behind. This was new territory for me. I wanted to see what happens on the other side once a company is infiltrated. If I was going to help clients recover from breaches, I also needed to be able to run a security organization to understand how customers felt when I told them they were vulnerable. This is how I transitioned into Incident Response.

At Foundstone, my first major IR (Incident Response) engagement was with a strategic government think tank that called us with what they thought was a simple computer issue. Their servers kept rebooting when they applied Microsoft security patches. We found that a foreign nation-state had broken into their systems to access confidential research. Curiously, every time the researchers in the organization printed anything (emails, reports, etc.) to their local printers on site, a duplicate document printed to a print server in a foreign country. Innovative, I thought. So, we first had to determine what the problem was along with its context, the number of compromised user accounts, and any malicious traces left in the environment. Then, we made a containment plan to fix it all at once.

I left Foundstone to join a startup called FireEye, which eventually acquired Mandiant, a company well known for its IR practice. We worked on a lot of IR cases and built an international team that, within two years, grew to over eighty people and now spans twenty countries.

From the first act of cyber-warfare in the commercial sector to Sony’s breach and everything in between, I’ve been in the trenches as well as the boardroom. I’ve worked hard, learned a lot, and haven’t been bored. My curiosity about what’s next has ultimately carved out and propelled my career. Alongside sharpening my technical skills, I have also developed my management style to select, build, and retain high-quality teams and handle clients. There are many books on theory and organizational behavior; I won’t bore you with the different types of management styles people use. I’ll share what’s worked for me.

My management ethos

I’ve honed my style of management working in mature organizations, major cybersecurity firms, startups, and smaller companies. The pool of cybersecurity experts is already quite small, and building a team of high-quality people who are both intrinsically motivated and technically competent means I must tailor my leadership style to each of them. From my days at school till now, one of the core principles that underpins my management philosophy has always been to learn from, and apply the best qualities of, every person I work with.

Whether it’s giving one-on-one constructive feedback (not “naming and shaming” in a group), rolling up my sleeves to jump in and get things done rather than barking orders, having my consultants’ backs and not throwing them under the bus in front of clients if something goes wrong, or reducing the amount of office politics, I try to model positive behavior and be a good example to others around me. It’s all about the team and not the individual. We all have positive and negative qualities, and I’ve learned to focus on someone’s best qualities and make sure each team member’s strengths fill a gap and are congruent with the rest of the team.

The second core principle I try to instill in the managers I work with is that we work for the consultants and our employees. We don’t work in a traditional pyramid where I’m the general who sits at the top and my underlings do everything I say. That’s normally what you get in mature organizations, where politics sometimes trumps employee well-being and development. Instead, I develop an inverted pyramid, where I believe our job as managers is to make sure consultants and employees have all the tools they need and that they’re well looked after. When this type of setup is done well, it instills loyalty and becomes part of the organization’s mission and culture.

This leads me to my third core principle: I have a flat management style, with no hierarchy in the teams I build. There are no sacred cows or politicking like you might see in other industries. Anyone—from a first-year graduate to a brilliant expert—can walk into my office and tell me I’m an idiot and I’m doing things wrong. Then, it’s up to me to justify my reaction and my decision.

Cybersecurity moves so fast that I have to quickly assess what someone on the front line defending against attackers is saying before I make what must be a swift decision on how to proceed. A siloed management structure, or one with multiple barriers, doesn’t work in our environment. An open communication channel disseminates information to all the consultants so they can react quickly, and it makes consultants feel safe to do their work or come to me with any issues. In cybersecurity, information is not power; it is an asset that must be shared, so I share everything unless it’s strategic or confidential. When we all communicate clearly and transparently, everyone understands the decision-making process. We can, therefore, focus on our collective mission, whether it’s facing the enemy and dealing with an attack or facing the client and ensuring we do a great job. Where there’s transparent leadership, everyone can succeed.

Lastly, I believe that you shouldn’t be afraid that people may be more intelligent than you. Having someone on my team who is smarter than me doesn’t diminish my value; rather, it gives me someone to learn from. And I want to hire and learn from the best. So, I tell the consultants and teams I lead to always try to hire people who are smarter than they are.


It’s all about building high-quality teams

This work requires nurturing and building teams and supporting them through their career progression. One of my core responsibilities is hiring and building teams where there aren’t any. First, I find the talent—from referrals and networking—and recruit them to join me in my mission. Once they’re hired, the fresh pool of talent needs to be melded into one team, one culture, one philosophy of thinking. It all gets done very quickly. Then, they’re technically enabled to do their job, no matter where they live.

The majority of new hires work from their home country. Recent college graduates are encouraged to visit the office environment, especially if they’ve had little corporate experience. New hires with at least five years of experience are more likely to start working remotely, but that depends in part on their maturity level. They must demonstrate their raw baseline talent before receiving the intensive methodology, tooling, and culture training that will enable them to work as one team from anywhere.

When you hire people from all over the world, with different philosophies and from different educational systems, you sometimes have to break down cultural barriers so they can work together. If I hire someone in Australia and another in Norway, they have to be able to work cohesively while under pressure. Does managing different people in different time zones get challenging? Yes, it does, and someone always loses out: either the Australians or the Norwegians. And as the distance between a head office and a remote worker grows, communication dramatically degrades, and people in remote locations will, at some point, feel cut off from the mothership. One of the keys to successfully managing a team is making yourself available to them.

My team-building methodology is usually structured, but because I’m in a fast-paced startup environment, something occasionally slips through the cracks. It’s not like working for an IBM, where there are multiple bodies, departments, and a large pool of resources. When I’m switching hats between setting up teams, proving results, and training new hires, a lot of things happen at once.

Our current industry environment makes it fairly easy to find “a seat on the bus.” If you’re technically strong but don’t have soft skills, you’ll have an easier time than if you excel at soft skills but have little to no technical talent. If it’s the latter, you’ll still find a seat, but there will be fewer choices available. This is an important distinction to make because, in a dream scenario, every team member is well-rounded and possesses all the skillsets, especially on the consulting side, which involves dealing directly with clients. Who wouldn’t want everyone they hire to have strong technical skills, be great with clients, perform detailed quality assurance, be able to travel, and be available to work no matter the time zone? I know I would. But I’ve found in building teams that the junior members are highly technical but have weaker management and soft skills, so they need nurturing and training in those areas. It’s also easier to build on their raw technical talent and train them into management roles than it is to turn those with strong soft skills into technical specialists.

It’s all about client management

The way I work with my clients typically hinges on the type of engagement I am contracted to provide (within consulting). Depending on the use case, whether the contract is for proactive testing or for responding to a breach, I take a different approach to managing how clients react in each of these situations.

In proactive testing, we assess the client’s problems and build a solution to address their needs. It’s similar to one-on-one consulting, where you need to further qualify and understand their issues, whether it’s testing their website or their network. From a strategic standpoint, I need to understand the client’s situational awareness and environment (technology, structure, organization, politics), and craft a solution and an engagement that makes sense for them. To do this, I have to be transparent about what can and cannot be done while managing expectations, and fully explain the risks and possible downtime. Meanwhile, the client must be fully aware of the expectations and know that, if mistakes do happen, we’ll own them and work quickly to find a resolution.

Client dynamics differ in a response engagement based on where the client is in the process. For example, if their systems have been down for three days before they call us, we’re walking into highly stressful conditions and have to manage the client and their environment very carefully. We have to be ready to tell the client, “You’re doing this wrong,” even if we ultimately do what they tell us to do despite our recommendation. In other situations, we must tell a client they’re doing something the wrong way and give them the choice to either carry on without us or spend money with us to resolve things correctly. In such cases, we know they won’t be successful on their own, because we’ve seen so many other clients make the same mistakes.

And so, if they choose not to fix the issue, or choose to simply implement a partial solution that still leaves them vulnerable, we have to be prepared to leave money on the table and walk away. Some clients respect our straight talk and see quickly that we care about what we do. Others choose their own path and carry on, but after running like a hamster on a wheel for another day or so and getting nowhere, they end up calling us back. Telling a client up-front what we can and cannot do is key to IR client management.

Managing client expectations is crucial to our success. Incident Response is much like the ER, where a patient comes in asking how long it will take before they can go home, but the doctors can’t answer until they’ve run a battery of tests before and after patching them up. The patient might come in showing the initial signs of a heart attack, but the diagnosis later reveals kidney failure or a brain tumor. So, when we tell clients a fix could take days or months, they learn not to expect a final report immediately. And if attackers are still in the system while we’re working in it—this happens often in nation-state cases—and they try to hide their tracks because they can see that we’re already on to them, we let the client know what’s going on.