Risk management begins with understanding your assets: where they are and how valuable they are. A vulnerability in an internal database used by 20 employees is not as important as a vulnerability found in your publicly exposed API.

Any risk found goes through a cost/benefit analysis: the possible exposure is compared with the cost of mitigating the risk. If a realised risk will cost you $10,000 but the fix will cost $50,000, it is a low priority. It simply doesn’t make sense to fix it right away.
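The comparison above can be sketched as a simple expected-loss calculation. This is a minimal illustration, not a real risk model; the dollar figures and function name are made up for the example.

```python
def mitigation_priority(expected_loss: int, mitigation_cost: int) -> str:
    """Rough cost/benefit triage: fix now only when the realised risk
    would cost more than the fix itself."""
    return "high" if expected_loss > mitigation_cost else "low"

# The example from the text: a $10,000 exposure vs a $50,000 fix.
print(mitigation_priority(10_000, 50_000))    # low
print(mitigation_priority(250_000, 50_000))   # high
```

In practice, expected loss would factor in likelihood as well as impact, but the decision rule stays the same.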

Risk ratings are a necessary part of any risk management strategy.

Simple rating systems strengthen the relationship between security and development teams. Developers need to know what must be fixed and what can wait. Intermediate levels, such as Medium ratings or one-to-ten scales, make it harder for developers to plan upcoming work. Medium risks are the ones most likely to sit in limbo because no one knows how important they really are.

Be clear on your priorities to avoid confusion. Any risks rated highly should undergo root cause analysis. Find out what led to such a critical risk and how to prevent the same thing from happening again. Over time, the emergency, “hair on fire” vulnerabilities should begin to disappear.
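A binary rating keeps that triage unambiguous. A minimal sketch, assuming findings carry a CVSS-style 0-10 score; the threshold and labels are illustrative, not a standard:

```python
def triage(severity_score: float) -> str:
    """Binary triage: every finding is either urgent or backlog.
    No Medium bucket for risks to sit in while no one decides."""
    FIX_NOW_THRESHOLD = 7.0  # illustrative cutoff, e.g. CVSS "High"
    return "fix-now" if severity_score >= FIX_NOW_THRESHOLD else "backlog"

print(triage(9.8))  # fix-now
print(triage(4.3))  # backlog
```

The value of the sketch is the forced choice: every finding lands in exactly one of two queues, so nothing can linger unclassified.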

Patch management

Dependencies are inevitable when building complex software systems. Your business operations likely depend on a mixture of in-house applications, open source components, and third-party vendor applications. All require good patch management to stay secure.

The code you write depends on open source libraries and frameworks. When vulnerabilities are found within these frameworks, all applications that use them are also vulnerable. It’s essential to update libraries and frameworks quickly so your applications aren’t getting exploited. The code you write might be bug-free, but what about the code your code depends on?
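One concrete form of that vigilance is checking your pinned dependencies against known advisories. A minimal sketch below; the package names and versions are invented for illustration, and a real process would pull advisories from a feed such as the GitHub Advisory Database or OSV rather than a hard-coded set.

```python
# Hypothetical advisory data: (package name, vulnerable version) pairs.
KNOWN_VULNERABLE = {
    ("exampleframework", "1.2.3"),
    ("examplelib", "0.9.0"),
}

def vulnerable_pins(requirements_lines):
    """Return (name, version) pairs from requirements.txt-style lines
    that match the known-vulnerable set."""
    findings = []
    for line in requirements_lines:
        line = line.strip()
        if not line or line.startswith("#") or "==" not in line:
            continue  # skip comments and unpinned entries
        name, version = line.split("==", 1)
        if (name.lower(), version) in KNOWN_VULNERABLE:
            findings.append((name, version))
    return findings

reqs = ["examplelib==0.9.0", "requests==2.31.0"]
print(vulnerable_pins(reqs))  # [('examplelib', '0.9.0')]
```

Running a check like this in CI turns "update libraries quickly" from a good intention into a gate that fails the build.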

Third-party vendor products undergo the same kind of testing as your own, and that testing can uncover vulnerabilities that put your users at risk. Patch management processes ensure that patches released by third-party vendors don’t sit for months before being applied to your environment.

Once a vulnerability is announced to the public, attackers will try to exploit it. The longer you wait to patch your systems, the longer attackers have to find your application and take advantage of your sloth.

Patch management processes can be difficult to create, but they don’t have to be complicated. Whenever possible, work within the confines of your developers’ daily workflow.

For example, if an open source library needs to be patched, create a pull request within the repo of the application so all developers will see it. Explain in the pull request what you’re updating and that all they have to do is merge and their job is done. You’ll not only help your application stay secure, but you’ll also generate plenty of goodwill with development teams.

Security assessments 

Security assessments are a necessary part of every security leader’s job.  However, it is important to understand which assessment to use when.

1. Black-box and white-box 

Security assessments come in two flavours: white-box and black-box.

White-box testing refers to allowing the tester to see into the inner workings of an application. The tester can see code or system diagrams, allowing them to find problems in the implementation of an application.


Black-box testing refers to testing an application with no knowledge of how it works. These tests simulate the vantage point of an attacker who has to learn by exercising the system. This type of testing better illustrates what an attacker would have to accomplish to successfully attack a system.

2. Vulnerability assessments 

Vulnerability assessments are white-box tests designed to reveal as many vulnerabilities as possible within an environment, along with guidance on remediation and priority.

Vulnerability assessments are not the same as penetration tests. They are a comprehensive look at all of your systems with access to the inner workings via code or diagrams.

Vulnerability assessments are the best choice if your organization has a low to medium security maturity. The goal is to find as many vulnerabilities within your environment as possible so you can secure the most critical pieces quickly. Once your environment has been hardened by several of these assessments, you can better take advantage of other assessment types.

3. Penetration tests 

Penetration tests and vulnerability assessments are sometimes confused, but they are not the same. Vulnerability assessments are white-box examinations of all vulnerabilities within a system. The point is to fix as many problems as possible within a short period of time.

Penetration tests are tightly focused, black-box tests aimed at specific functionality within a system. Penetration tests should be done after you’ve cleaned up the vulnerabilities found and have a reasonable level of confidence in your application. Penetration tests have a specific goal in mind, such as exfiltrating data or gaining admin rights to a server.

For instance, you may perform a vulnerability assessment (or multiple assessments) against your shopping cart functionality to find common configuration and coding errors. Once all of the findings from that first assessment are fixed, you run a penetration test against the shopping cart system to make sure it’s been sufficiently protected. This order of testing ensures that penetration tests are used effectively: finding difficult-to-detect errors that code scanners won’t find.

4. Red team services 

A red team is a permanent team dedicated to improving the information security posture of a company. Red teams are not a one-time assessment; they continually test applications to find vulnerabilities.

Red teams focus on using real-world tactics to attack an organization’s assets. They are made up of highly trained and experienced professionals who think like attackers.

5. Audit 

An audit is not a true security assessment. It measures how well your systems match up to a chosen standard. Even if some vulnerabilities are found during an audit, that isn’t the main purpose.

You can be compliant with a standard and be insecure at the same time. Audits don’t verify security but verify conformance with an interpretation of what security should be. This is an important distinction.


Organisations with good security practices are very likely to be compliant. But compliance, while necessary, should never be confused with security. Juggling various security assessments and audits is no easy feat—you can do it when you understand the purpose behind each assessment and when to use it.

Securing the shifting sands 

The ever-changing risk landscape exposes your company to new and dangerous risks every day. Here are some general principles that’ll help you keep ahead of these risks.

Third parties and vendors 

Third parties have practically become a requirement in today’s connected world. Using third parties, however, brings risks to your business. There are three ways risk increases when using third parties:

  • The risk of data being misused by a third party
  • The risk of poor security practices leaking your data without your knowledge
  • The increased attack surface if the third party application contains vulnerabilities

In today’s environment, you’re only as secure as your weakest vendor. Vet your vendors carefully and make sure you’re comfortable with their security policies before signing a contract with them. Once you hand over your data, it’s too late to worry about security.

Your attack surface changes 

More and more companies are adopting DevOps practices. These practices encourage development teams to ship code and get fast feedback wherever possible.

Don’t sacrifice security in exchange for “better, faster, cheaper.” Fast-moving development environments increase the risk of services and applications being deployed without the security team’s knowledge. This is especially true of cloud environments, where developers could create virtual machines and deploy code to them at any time.

Strong policies are needed so admins know what they are allowed to do. Policies preventing the abuse of cloud resources are a good idea, but you have to enforce them. AWS, Azure, and Google Cloud aren’t going to police your admins and developers for you.

Automation can help enforce policies. For example, AWS Lambda can scan files as they are uploaded to S3 buckets, and policies can prevent developers from creating new virtual machines from their own accounts. A good rule of thumb is to build any infrastructure your applications need through a build pipeline, taking humans out of the decisions about how VMs are built and deployed.
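The Lambda-scanning idea can be sketched as below. The event shape follows the standard S3 notification format; the scan logic is a trivial placeholder (a real deployment would invoke an antivirus or DLP engine), and the injectable `fetch` parameter is an assumption added here so the handler can be exercised without AWS credentials — in production it would wrap boto3’s `get_object`.

```python
SUSPICIOUS_MARKERS = (b"EICAR", b"<script>")  # placeholder signatures

def scan_bytes(data: bytes) -> bool:
    """Return True when the payload looks suspicious."""
    return any(marker in data for marker in SUSPICIOUS_MARKERS)

def handler(event, context=None, fetch=None):
    """Lambda entry point for S3 upload notifications.
    `fetch(bucket, key) -> bytes` is injected for local testing."""
    findings = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        if fetch and scan_bytes(fetch(bucket, key)):
            findings.append(f"s3://{bucket}/{key}")
    return {"suspicious": findings}

# Local dry run with a fake event and an in-memory object store.
fake_event = {"Records": [{"s3": {"bucket": {"name": "uploads"},
                                  "object": {"key": "report.pdf"}}}]}
store = {("uploads", "report.pdf"): b"harmless-looking <script> payload"}
print(handler(fake_event, fetch=lambda b, k: store[(b, k)]))
```

The point is the pattern, not the scanner: every upload passes through a control the security team owns, without slowing developers down.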


The security team should be made aware of any new assets and what their purpose is. This can be daunting in a micro-service environment but is absolutely necessary. Know when new services are created and released. Understand what cloud service accounts exist and have a clear process on how to create new ones so they can be monitored. You must proactively look at what is currently running in your environment and respond to anything weird.

The key to keeping up with the security of your systems is to reduce the element of surprise. Build processes that enable security teams to stay up-to-date with new assets. These assets must be tested and deployed according to well-enforced policies. Hold your vendors to the same standards you’d hold yourself. Don’t trust your data with just anyone.
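Reducing surprise can be as simple as regularly diffing what is actually running against the assets registered with the security team. A minimal sketch, with illustrative service names; a real version would pull the running set from your cloud provider’s inventory APIs.

```python
def unknown_assets(running, inventory):
    """Return assets observed in the environment that were never
    registered with the security team."""
    return sorted(set(running) - set(inventory))

inventory = {"api-gateway", "billing-svc", "web-frontend"}
running = {"api-gateway", "billing-svc", "web-frontend", "test-vm-42"}
print(unknown_assets(running, inventory))  # ['test-vm-42']
```

Anything the diff surfaces is exactly the "anything weird" the text describes: an asset no one told security about.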

By Miju Han, Director of Product Management at HackerOne

Editor’s note: e27 publishes relevant guest contributions from the community. Share your honest opinions and expert knowledge by submitting your content here.

