Creating a Good Security Conference CFP Submission

So you’re interested in submitting a talk for a security conference? Awesome! Above all else, what keeps our industry moving forward is the free and open sharing of information. Submitting a talk can be a scary experience, and the process for how talks are evaluated can feel mysterious.

So what’s the best way to create a good security conference CFP submission?

It’s perhaps best to consider the questions that the review board will ask themselves as they review the submissions:

  • Will this presentation be of broad interest (i.e., to 10-20% of attendees)?
  • Is this presentation innovative and useful?
  • Is this presentation likely to be able to deliver on its outline and promises?
  • Is this presentation consistent with the values of the conference?

They are also likely going to review submissions in an extremely basic web application or an Excel spreadsheet.

[Screenshot: the CFP review tool, showing the long list of submissions and a tiny scroll bar]

See that scroll bar on the right? It’s tiny. For DerbyCon this year, the CFP board reviewed ~500 submissions. That’s a lot of work, but it’s also an incredible honour. It’s like going to an all-you-can-eat buffet prepared by some of the top chefs in the world. But that buffet is 3 miles long. It’s overwhelming, but in a good way 🙂

Let’s talk about how you can create a good CFP submission based on the questions reviewers are asking themselves.

Review Board Criteria

Will this presentation be of broad interest?

If a conference is split into 5 tracks, accepted presentations must be interesting to about 20% of the attendees. If your talk is too specialized - such as the latest advances in super-theoretical quantum cryptography - you might find yourself talking to an audience of 4.

A common problem in this category is vendor-specific talks. Unless the technology is incredibly common, it will just come across as a sales pitch. And nobody wants to see a sales pitch.

That said, some talks are of broad interest exactly because they are so far outside people’s day-to-day experience. While an attendee may never have the opportunity to experience the lifestyle of a spy, an exposé into the life of one would most certainly have popular appeal.

Is this presentation innovative and useful?

The security industry is incredible at sharing information. For example, @irongeek records and shares thousands of videos from DerbyCon, various BSides, and more. DEF CON has been sharing videos for the last couple of years, as have Black Hat and Microsoft’s BlueHat. If an audience member is interested in a topic, there’s a good chance they’ve already watched something about it through one of these channels. In your CFP submission, demonstrate that your presentation is innovative or useful.

  • Does it advance or significantly extend the current state of the art?
  • Does it distill the battle scars from attempting something in the real world (i.e., at a large company, on a team, or in a product)?
  • If it’s a 101 / overview type presentation, does it cover the topic well?

Is this presentation likely to be able to deliver on its outline and promises?

Presentation submissions frequently promise far more than what they can accomplish. For example:

  • Content outlines that could never be successfully delivered in the time slot allotted for a presentation.
  • Descriptions of in-progress research that the presenter hopes will bear fruit – or worse, research that hasn’t even started.
  • Exaggerated claims or scope that will disappoint the audience when the actual methods are explained.

Is this presentation consistent with the values of the conference?

Some presentations are racist, sexist, or otherwise likely to offend attendees. This might not be obvious at first, but slang you use amongst your friends or coworkers can come across much differently to an audience. These are easy to weed out.

Many conferences aim to foster a healthy relationship between various parts of the community (i.e., researchers, defenders, and vendors), so a talk that is overly negative or discloses an unpatched vulnerability in a product is likely not going to be one that the conference wants to encourage.

On the other hand, some conferences actively cater to the InfoSec tropes of edgy attackers vs defenders and vendors. You might find an otherwise high-quality Blue Team talk rejected from one of those.

Some submissions may appear to skate a fine line on this question, so good ones are explicit about how they will address the concern. For example, they might mention that the vulnerability being presented has been disclosed in coordination with the vendor and will be patched by the time the presentation is given.

Common Mistakes

Those are some of the major thoughts going through a reviewer’s mind as they review the conference submissions. Here are a few common mistakes that make it hard for a reviewer to judge submissions.

  • Is the talk outline short? If so, the reviewer probably doesn’t have enough information to evaluate how well the presentation addresses the four main questions from above. A good outline is usually between 150 and 500 words. Look at talks 3, 4, and 5 in the screenshot above to see how this looks in practice!
  • Does the title, description or outline rely heavily on clichés? If so, the presentation is likely going to lack originality or quality – even if it is for fun and profit.
  • Is the talk overly introspective? Talks that focus heavily on the presenter (“My journey to …”) are hard to get right, since attendees usually need to be familiar with the presenter in order for the talk to have an impact. Many review processes are blind (reviewers don’t know who submitted the session), so this kind of talk is almost impossible to judge.
  • Is the talk a minor variation of another talk? Some presenters submit essentially the same talk under two or three names or variations. What primarily drives a reviewer’s decision about a talk is the meat, not a bit of subtle word play in the title. They will likely recognize the multiple variations of the talk and select only one – but which one specifically is unpredictable. When votes are tallied, three talks with one vote each are much less likely to be selected than a single talk with three votes.
  • Is the submission rife with grammar and spelling errors? I don’t personally pay much attention to this category of mistake, but many reviewers do. If you haven’t spent the effort running your submission through spelling and grammar check, how much effort will you spend on the talk itself?

XOR is Not as Fancy as Malware Authors Think

FireEye recently posted some research about an attack leveraging the NetSupport Remote Access tool. The first stage of this attack uses a lot of obfuscation tricks to try to make reverse engineering more difficult.

David Ledbetter and I were chatting about some of the lengths the malware authors went through to obfuscate the content.

One of the major sources of complexity is an iterative XOR:

[Screenshot of the malware’s obfuscated JavaScript decoding loop. Image credit: FireEye]

Unlike most malware, which obfuscates content by XORing it with a single-byte key, this malware appears to do something much more clever. See the content starting at ‘var tmpKeyLength = 1;’?

  1. XOR each character of the content with the first byte of the encryption key
  2. XOR characters of the content with bytes from the encryption key in the following pattern: 1, 2, 1, 2, 1, 2, 1, 2, …
  3. XOR characters of the content with bytes from the encryption key in the following pattern: 1, 2, 3, 1, 2, 3, 1, 2, 3, …

When malware uses step #1 alone -- or even a repeating single-key XOR -- I like to call it “Encraption”. It appears complicated, but is vulnerable to many forms of cryptanalysis and can be easily broken. Given that this malware did several levels of Encraption, did they manage to finally invent something more secure than a basic repeating key XOR?

Not even close.

XOR is Associative

One of the biggest challenges with using XOR in cryptography is that it is associative: you can rearrange parentheses without impacting the final result. For example, consider again a single-byte key and the following double XOR encryption:

  1. Take the content
  2. XOR each character by the value ‘123’
  3. XOR each character by the value ‘321’
  4. Emit the result

If we were to add parentheses to describe the order of operations:

(Content XOR 123) XOR 321

Because XOR is associative, you can rearrange the parentheses (the order of operations) to make it:

Content XOR (123 XOR 321)

Which gives 314:

[Screenshot: 123 XOR 321 evaluating to 314]
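
You can verify this with PowerShell’s -bxor operator – a minimal check of the associativity claim above:

## XOR with 123 and then 321 is the same as a single XOR with (123 XOR 321)
$content = [int][char]'H'
($content -bxor 123) -bxor 321     ## 370
$content -bxor (123 -bxor 321)     ## 370

123 -bxor 321                      ## 314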

So, encraption with two keys is still just encraption with a single key and is vulnerable to all of the same attacks.

But what about that rolling key?

The malware above used something more like a rolling key, however. It didn’t just use a couple of single-byte keys: if the content was 100 bytes, it did 100 rounds of XOR based on the key. Surely that must be secure.

Fortunately? Unfortunately? The answer is no. If we remember the malware’s algorithm:

    1. XOR each character of the content with the first byte of the encryption key
    2. XOR characters of the content with bytes from the encryption key in the following pattern: 1, 2, 1, 2, 1, 2, 1, 2, …
    3. XOR characters of the content with bytes from the encryption key in the following pattern: 1, 2, 3, 1, 2, 3, 1, 2, 3, …

Consider the perspective of a single character. It gets encrapted by one byte of the key, and then a different byte of the key, and then a different byte of the key… and so on. And because XOR is associative, as we demonstrated above, that is the same thing as the single character being encrapted by a single byte.

A PowerShell Demonstration

Let’s take a look at a demonstration of this in PowerShell.

First, let’s look at a faithful re-implementation of the original algorithm:

$stringToEncrypt = [char[]] "Hello World!"
$encrypted = $stringToEncrypt.Clone()
$key = 97,4,13,252,119,31,208,156,196,56

$tmpKeyLength = 1
while($tmpKeyLength -le $key.Length)
{
    $tmpKey = $key[0..$tmpKeyLength]
    for($i = 0; $i -lt $encrypted.Length; $i++)
    {
        $encrypted[$i] = $encrypted[$i] -bxor $tmpKey[$i % $tmpKey.Length]
    }
    $tmpKeyLength++
}

"Encrypted:"
([byte[]]$encrypted) | Format-Hex | Out-String

When you take a look at the result, here’s what you get:

[Screenshot: Format-Hex output of the encrypted bytes]

Pretty impressive! Look at all those non-ASCII characters. This must be unbreakable!

To get the equivalent single-pass encraption key, we can just XOR the encrapted string with the original string. How?

XOR is Commutative

We can do this because XOR is commutative as well: you can rearrange the order of terms without impacting the result.

If Encrapted is:

Content XOR Key1 XOR Key2 XOR Key3

then we do:

Encrapted XOR Content

then we get:

Content XOR Key1 XOR Key2 XOR Key3 XOR Content

Because XOR is commutative, we can rearrange terms to get:

Content XOR Content XOR Key1 XOR Key2 XOR Key3

Anything XOR’d with itself can be ignored

One of the reasons XOR encraption works is that anything XOR’d with itself can be ignored. For example:

Encrypted = Content XOR Key

Decrypted = Encrypted XOR Key

By XORing some content with a key twice, you get back the original content. So, going back to where we left off in the last section: if we XOR the final result with the original content and rearrange, we get:

Content XOR Content XOR Key1 XOR Key2 XOR Key3

That gives us an equivalent single key that we can use: Key1 XOR Key2 XOR Key3.

Here’s an example of figuring out this new single-pass key:

$newKey = New-Object 'char[]' $stringToEncrypt.Length
for($i = 0; $i -lt $stringToEncrypt.Length; $i++)
{
    $newKey[$i] = $encrypted[$i] -bxor $stringToEncrypt[$i] 
}

""
"Equivalent key: " + (([int[]]$newKey) -join ",")

And the result:

[Screenshot: the equivalent single-pass key, emitted as a comma-separated list of bytes]

And to prove they give equivalent results:

$encrypted = $stringToEncrypt.Clone()
for($i = 0; $i -lt $encrypted.Length; $i++)
{
    $encrypted[$i] = $encrypted[$i] -bxor $newKey[$i]
}

""
"Easy encrypted:"
([byte[]]$encrypted) | Format-Hex | Out-String

[Screenshot: the single-pass “Easy encrypted” output matches the original multi-pass output]

The Nelson Moment

This is where we get to point and laugh a little. Remember how the malware repeatedly XORs the content with various bits of the key? It did far more damage than the malware author realized. Let’s consider the character in the content at position 0 for a moment:

    1. XOR with byte 0 of the key
    2. XOR with byte 0 of the key (thereby stripping the encraption altogether!)
    3. XOR with byte 0 of the key
    4. XOR with byte 0 of the key (thereby stripping the encraption altogether!)

Let’s consider byte 1 of the content

    1. XOR with byte 0 of the key
    2. XOR with byte 1 of the key
    3. XOR with byte 1 of the key (thereby stripping the work done in step 2)
    4. XOR with byte 1 of the key
    5. XOR with byte 1 of the key (thereby stripping the work done in step 4)

Depending on the length of the key and the content, this pattern of doing work and then undoing it continues. It takes what could have been a potentially strong key of hundreds of bytes down to a super expensive way to compute a single key!! Here’s a demo of encrapting some GUIDs:

[Screenshot: Format-Hex output of several encrapted GUIDs, with some bytes left unchanged]

Notice that some characters (28 at the beginning, c9 at offset 0x48) didn’t even get encrypted at all?
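
To see why, here’s a quick sketch using the 10-byte $key from the re-implementation above. Position 0 gets XORed with key byte 0 once per round, and an even number of applications cancels out completely:

## With the 10-byte key above, the loop runs 10 rounds, and position 0 is XORed
## with $key[0] in every round. XORing with the same byte an even number of
## times is the same as not encrypting that position at all.
$effectiveKeyForPosition0 = 0
foreach($round in 1..$key.Length)
{
    $effectiveKeyForPosition0 = $effectiveKeyForPosition0 -bxor $key[0]
}
$effectiveKeyForPosition0          ## 0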

Remember folks, don’t roll your own crypto, and especially don’t roll your own crapto.

Part-of-Speech Tagging with PowerShell

When analyzing text, a common goal is to identify the parts of speech within that text – what parts are nouns? Adjectives? Verbs in their gerund form?

To accomplish this goal, the area of natural language processing in Computer Science has developed systems for Part of Speech tagging, or “POS Tagging”. The acronym preceded the version in Urban Dictionary 🙂

The version I used in University was a Perl-based Brill Tagger, but things have advanced quite a bit – and the Stanford NLP group has done a great job implementing a Java version with C# wrappers here:

https://nlp.stanford.edu/software/tagger.shtml

The default English model is 97% correct on known words, and 90% correct on unknown words. “SpeechTagger” is a PowerShell interface to this tagger:

[Screenshot: Split-PartOfSpeech tagging a sample sentence]

By default, Split-PartOfSpeech outputs objects that represent words and the part of speech associated with them. The TaggerModel parameter lets you specify an alternate tagger model; the Stanford Part of Speech Tagger supports:

  • Arabic
  • Chinese
  • English
  • French
  • German
  • Spanish

The -Raw parameter emits sentences in the common text-based format for part-of-speech tagging, separating each word and its part of speech with the ‘/’ character. This is sometimes useful for regular expressions, or for adapting code you might have previously written to consume other part-of-speech taggers.
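
For example, with a Penn Treebank-style model the -Raw output looks something like this (the parameter used to pass the input text is an assumption here, not taken from the module’s documentation):

## Hypothetical invocation -- the name of the input parameter is an assumption
Split-PartOfSpeech -Text "The quick brown fox jumps over the lazy dog" -Raw

## The/DT quick/JJ brown/JJ fox/NN jumps/VBZ over/IN the/DT lazy/JJ dog/NN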

To install this project, simply run the following command from PowerShell:

Install-Module -Name SpeechTagger

Automatic Word Clustering: K-Means Clustering for Words

K-Means clustering is a popular technique to find clusters of data based only on the data itself. This is most commonly applied to data that you can somehow describe as a series of numbers.

[Animation: cluster assignments converging over several iterations]

When you can describe the data points as a series of numbers, K-Means clustering (Lloyd’s Algorithm) takes the following steps:

  1. Randomly pick a set of group representatives. Lloyd’s algorithm generally picks random coordinates, although it sometimes picks specific random data points instead.
  2. Assign all of the items to the nearest representative.
  3. Update the group representative to more accurately represent its members. In Lloyd’s algorithm, this means updating the location of the representative to represent the average location of every item assigned to it.
  4. Revisit all of the items, assigning them to their nearest group representative.
  5. If any items changed groups, repeat from step 3.

Applying this technique directly to words is not possible, as words don’t have coordinates. Because of that:

  • Randomly picking a coordinate cannot be used to randomly create a group representative.
  • Refining a group representative based on its current word cluster is more complicated than simply averaging the coordinates of the items in the cluster.

If we follow the philosophy of Lloyd’s algorithm, however, we can still create a version of K-Means Clustering for Words. In our implementation, we:

  1. Pick random words from the provided list as group representatives.
  2. Use Levenshtein Distance (string similarity) to measure "nearest group representative".
  3. Use word averaging to update the nearest group representative. The "average" word is a new word of the average word length, with the character at each position chosen as the most common letter at that position (see the sketch below).

This is very computationally expensive for large data sets, but can provide some very reasonable clustering for small data sets.
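
To make the word-averaging step concrete, here’s a minimal sketch of one way to implement it (the function name and approach are illustrative, not the actual Get-WordCluster internals):

function Get-AverageWord
{
    param([string[]] $Words)

    ## The "average" word length of the cluster
    $averageLength = [int] [Math]::Round(
        ($Words | Measure-Object -Property Length -Average).Average)

    ## For each position, take the most common letter at that position
    $letters = for($position = 0; $position -lt $averageLength; $position++)
    {
        $Words |
            Where-Object { $_.Length -gt $position } |
            ForEach-Object { $_[$position] } |
            Group-Object |
            Sort-Object -Property Count -Descending |
            Select-Object -First 1 -ExpandProperty Name
    }

    -join $letters
}

## For example: Get-AverageWord -Words "Hello","Hell","Help" returns "Hell"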

To explore this further, you can download Get-WordCluster from the PowerShell Gallery. It’s as simple as:

  1. Install-Script Get-WordCluster -Scope CurrentUser
  2. (-split "Hello world Hell word SomethingElse") | Get-WordCluster -Count 3

Easily Search for Vanity Ham Call Signs

When you first get your ham radio license, the FCC gives you a random call sign based on your location and roughly your date of application. The resulting call sign is usually pretty impersonal, but the FCC lets you apply for a “vanity” call sign for free.

While the rules for these vanity call signs change depending on your license class (Technician, General, Extra), most of the good (shorter) vanity call signs that fall under the “extra” rules are taken. So realistically, your options will likely be a 2x3 callsign (group C or group D).


One place to look for available call signs is at http://callsign.ualr.edu. Please be kind and don’t pound their server! If you want to do hundreds / thousands of lookups, you can download the FCC database directly.

But if you only want to look at tens / hundreds, here’s a PowerShell script to help out.

## Check which 2x3 call signs ending in "LEE" are still available
$final3 = "LEE"
$prefixes = "WX","KZ"

foreach($prefix in $prefixes) {
    foreach($number in 0..9) {
        $call = $prefix + $number + $final3

        ## Query the lookup site; its response says when no record exists
        $wr = Invoke-WebRequest http://callsign.ualr.edu/detail.php -Method Post -Body @{ call = $call }
        if($wr.Content -match "no records were found!") { $call }
    }
}

Searching for Content in Base-64 Strings

You might have run into situations in the past where you’re looking for some specific text or binary sequence, but that content is encoded with Base-64. Base-64 is an incredibly common encoding format in malware and binary obfuscation tools alike.

The basic idea behind Base-64 is that it takes arbitrary binary data and encodes it using an alphabet of 64 (naturally) ASCII characters that can be transmitted safely over any normal transmission channel. Wikipedia goes into the full details here: https://en.wikipedia.org/wiki/Base64.

Some tooling supports decoding of Base-64 automatically, but that requires some pretty detailed knowledge of where the Base-64 starts and stops.

The Problem

Pretend you’re looking for the string “Hello World” in a log file or SIEM system, but you know that it’s been Base-64 encoded. You might use PowerShell’s handy Base-64 classes to tell you what to search for:

[Screenshot: converting “Hello World” to Base-64 in PowerShell]

That seems useful. But what if “Hello World” is in the middle of a longer string? Can you still use ‘SGVsbG8gV29ybGQ=’? It turns out, no. Adding a single character to the beginning changes almost everything:

[Screenshot: converting “ Hello World” (with a leading space) to Base-64]

Now, we’ve got ‘IEhlbGxvIFdvcmxk’.
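
You can reproduce both of these yourself with the same .NET classes:

## Base-64 of the string on its own
[Convert]::ToBase64String([System.Text.Encoding]::ASCII.GetBytes("Hello World"))
## SGVsbG8gV29ybGQ=

## Base-64 of the same string with one extra character in front
[Convert]::ToBase64String([System.Text.Encoding]::ASCII.GetBytes(" Hello World"))
## IEhlbGxvIFdvcmxk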

The main problem here is the way that Base-64 works. When we’re encoding characters, Base-64 takes 3 bytes (24 bits) and re-interprets them as 4 segments of 6 bits each. It then encodes each of those 6-bit segments as one of the 64 characters that you know and love. Here’s a graphical example from Wikipedia:

[Diagram from Wikipedia: three bytes re-interpreted as four 6-bit Base-64 characters]

So when we add a character to the beginning, we shift the whole bit pattern to the right and change the encoding of everything that follows!

Another feature of Base-64 is padding. If your content isn’t evenly divisible by 24 bits, Base-64 encoding will pad the remainder with zero bits. It will use the “=” character to denote how much padding was needed:

[Diagram from Wikipedia: Base-64 output padded with “=” characters]

When final padding is added, you can’t just remove those “=” characters. If additional content is added to the end of your string (i.e., “Hello World!”), that additional content will influence both the padding characters and the character before them.
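
Here’s a quick demonstration of that effect:

[Convert]::ToBase64String([System.Text.Encoding]::ASCII.GetBytes("Hello World"))
## SGVsbG8gV29ybGQ=

[Convert]::ToBase64String([System.Text.Encoding]::ASCII.GetBytes("Hello World!"))
## SGVsbG8gV29ybGQh -- the "=" padding is gone, and the characters near the end
## now depend on the bits of the appended "!"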

Another major challenge is when the content is Unicode rather than ASCII. All of these points still apply – but the bit patterns change. Unicode usually represents characters as two bytes (16 bits). This is why the Base-64 encoding of Unicode content representing ASCII text has so many ‘A’ characters: ‘A’ is how Base-64 represents a run of zero bits, which is exactly what those NULL bytes are.

[Screenshot: Base-64 encoding of the Unicode version of the text, showing the repeated ‘A’ characters]
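
For example, the Unicode (UTF-16LE) bytes of “Hello World” encode like this:

[Convert]::ToBase64String([System.Text.Encoding]::Unicode.GetBytes("Hello World"))
## SABlAGwAbABvACAAVwBvAHIAbABkAA==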

The Solution

When you need to search for content that’s been Base-64 encoded, then, the solution is to generate the Base-64 encoding of your search string at each of the three possible byte alignments, and remove the characters that might be influenced by the context: content either preceding what you are looking for, or content that follows. Additionally, you should do this for both the ASCII and Unicode representations of the string.

An Example

One example of Base-64 content is in PowerShell’s -EncodedCommand parameter. This shows up in Windows Event Logs if you have command-line logging enabled (and of course shows up directly if you have PowerShell logging enabled).

Here’s an example of an event log like that:

[Screenshot: a Windows event log entry showing powershell.exe launched with -EncodedCommand]

Here’s an example of launching a bunch of PowerShell instances with the -EncodedCommand parameter, as well as the magical Get-Base64RegularExpression command. That command will generate a regular expression that you can use to match against that content:

[Screenshot: Get-Base64RegularExpression generating a pattern, and the matching event log entries]

As you can see in this example, searching for the Base-64 content of “My voice is my” returned all four log entries, while the “My voice is my passport” search returned the single event log that contained the whole expression.
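
If you want to try the same kind of search yourself, the pattern looks roughly like this (the event log query here is illustrative – adapt it to wherever your logs live):

## Build a regular expression that matches any Base-64 encoding of the search text
$regex = .\Get-Base64RegularExpression.ps1 -Value "My voice is my passport"

## Search PowerShell logs for encoded commands that contain that text
Get-WinEvent -LogName "Windows PowerShell" |
    Where-Object { $_.Message -match $regex }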

The Script

Get-Base64RegularExpression is a pretty simple script. You can use its output in PowerShell, or in any event log system that supports basic regular expression searches.

## Get-Base64RegularExpression.ps1
## Get a regular expression that can be used to search for content that has been
## Base-64 encoded

param(
    ## The value that we would like to search for in Base64 encoded content
    [Parameter(Mandatory)]
    $Value
)

## Holds the various byte representations of what we're searching for
$byteRepresentations = @()

## If we got a string, look for the Unicode and ASCII representations of the string
if($Value -is [String])
{
    $byteRepresentations += 
        [System.Text.Encoding]::Unicode.GetBytes($Value),
        [System.Text.Encoding]::ASCII.GetBytes($Value)
}

## If it was a byte array directly, look for the byte representations
if($Value -is [byte[]])
{
    $byteRepresentations += ,$Value
}

## Find the safe searchable sequences for each Base64 representation of input bytes
$base64sequences = foreach($bytes in $byteRepresentations)
{
    ## Offset 0. Sits on a 3-byte boundary so we can trust the leading characters.
    $offset0 = [Convert]::ToBase64String($bytes)

    ## Offset 1. Has one byte from preceding content, so we need to throw away the
    ## first 2 leading characters
    $offset1 = [Convert]::ToBase64String( (New-Object 'Byte[]' 1) + $bytes ).Substring(2)

    ## Offset 2. Has two bytes from preceding content, so we need to throw away the
    ## first 4 leading characters
    $offset2 = [Convert]::ToBase64String( (New-Object 'Byte[]' 2) + $bytes ).Substring(4)

    ## If there is any terminating padding, we must remove the characters mixed with that padding. That
    ## ends up being the number of equals signs, plus one.
    $base64matches = $offset0,$offset1,$offset2 | % {
        if($_ -match '(=+)$')
        {
            $_.Substring(0, $_.Length - ($matches[0].Length + 1))
        }
        else
        {
            $_
        }
    }

    $base64matches | ? { $_ }
}

## Output a regular expression for these sequences
"(" + (($base64sequences | Sort-Object -Unique) -join "|") + ")"

TripleAgent: Even Zeroer-Tay Code Injection and Persistence Technique

Overview

We'd like to introduce TripleAgent, a new Zero-Tay technique for injecting code and maintaining persistency against common advanced attacker toolkits. We discovered this all by ourselves in our very advanced labs, and are in the process of registering a new vanity domain as we speak.

TripleAgent can exploit:

  • Every toolkit version
  • Every toolkit architecture (x86 and x64)
  • Every toolkit user (RED / PURPLE / APT / NATION STATE / etc.)
  • Every toolkit process (including PoC, GTFO, PoC||GTFO, METASPLOIT, UNICORN)

TripleAgent exploits a fundamental flaw in the design of commonly used advanced attacker toolkits, and therefore cannot be patched.

Code Injection

TripleAgent gives the defender the ability to inject any DLL into any attacker toolkit. The code injection occurs extremely early during the victim's process boot, giving the defender full control over the process and no way for the process to protect itself. The code injection technique is so unique that it's not detected or blocked by even the most advanced threaty threats.

Attack Vectors

  • Attacking persistence toolkits - Taking full control of ANY persistence toolkit by injecting code into it while bypassing all of its self-protection mechanisms. The attack has been verified and works on all bleeding-edge attacker toolkits including but not limited to: DoubleAgent.

Technical Deep, Deep, Deep, Dive

An example of an advanced attacker toolkit is known as DoubleAgent. This attacker toolkit exploits a fundamental issue in Windows, nay computing, NAY HUMANITY itself.

When this advanced toolkit runs, it is widely acknowledged to provide complete control over other unwitting applications. However, we can apply our new TripleAgent framework to this toolkit to completely neutralize it. Rather than have it infect target systems, we can write a few simple lines of code to make it instead launch the Windows Update settings dialog!

static BOOL main_DllMainProcessAttach(VOID)
{
    PROCESS_Create(L"c:\\windows\\system32\\cmd.exe " L"/c start ms-settings:windowsupdate");
    return TRUE;
}

Once run, we can see the significant impact of our new zero-tay technique. The first invocation installs our TripleAgent exploit, rendering the advanced "DoubleAgent" threat completely harmless during its second invocation.

Mitigations

Unfortunately, there are no mitigations or bypasses for this extremely advanced defensive technique. We do however offer highly-advanced next generation cyber threat intel cloud machine learning offensive services. Just putting that out there.

Adding a Let’s Encrypt Certificate to an Azure-Hosted Website

If you host your website in Azure, you might be interested in adding SSL support via Let's Encrypt. Azure doesn't offer any functionality to automate this or make it easy, but thankfully the PowerShell community offers plenty of useful tools to fill the gap.

  1. ACMESharp - A PowerShell module to interact with Let's Encrypt.
  2. Azure PowerShell - A set of PowerShell modules to interact with Azure.

What's been missing (until now!) is the glue. So now, here's the glue: Register-LetsEncryptCertificate.ps1.

So the steps:

  1. Install-Module ACMESharp, Azure, AzureRM.Websites
  2. Install-Script Register-LetsEncryptCertificate.ps1
  3. Register-LetsEncryptCertificate -Domain www.example.com -RegistrationEmail you@example.com -ResourceGroup exampleResourceGroup -WebApp exampleWebApp
  4. Visit https://www.example.com

Done!

 

Why is SeDebugPrivilege enabled in PowerShell?

We sometimes get the question: Why is the SeDebugPrivilege enabled by default in PowerShell?

This is enabled by .NET when PowerShell uses the System.Diagnostics.Process class, which it does for many reasons. One example is the Get-Process cmdlet. Another is the method it invokes to get the current process ID for the $pid variable. Any .NET application that uses the System.Diagnostics.Process class also enables this privilege.
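
You can see this for yourself by looking at the token of a running PowerShell session:

## SeDebugPrivilege shows up as Enabled in the current PowerShell process
whoami /priv | Select-String SeDebugPrivilege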

 

You can see the .NET code that enables this here:

NativeMethods.LUID luid = default(NativeMethods.LUID);
if (!NativeMethods.LookupPrivilegeValue(null, "SeDebugPrivilege", out luid))
{
    return;
}

IntPtr zero = IntPtr.Zero;
try
{
    if (NativeMethods.OpenProcessToken(new HandleRef(null, NativeMethods.GetCurrentProcess()), 32, out zero))
    {
        NativeMethods.TokenPrivileges tokenPrivileges = new NativeMethods.TokenPrivileges();
        tokenPrivileges.PrivilegeCount = 1;
        tokenPrivileges.Luid = luid;
        tokenPrivileges.Attributes = 2;
        NativeMethods.AdjustTokenPrivileges(new HandleRef(null, zero), false, tokenPrivileges, 0, IntPtr.Zero, IntPtr.Zero);
    }
}

https://github.com/dotnet/corefx/blob/master/src/System.Diagnostics.Process/src/System/Diagnostics/ProcessManager.Windows.cs#L129

 

Detecting and Preventing PowerShell Downgrade Attacks

With the advent of PowerShell v5’s awesome new security features, old versions of PowerShell have all of a sudden become much more attractive to attackers and Red Teams.

PowerShell Downgrade Attacks

There are two ways to do this:

Command Line Version Parameter

The simplest technique is: “PowerShell -Version 2 -Command <…>” (or of course any of the -Version abbreviations).

PowerShell.exe itself is just a simple native application that hosts the CLR, and the -Version switch tells PowerShell which version of the PowerShell assemblies to load.

Unfortunately, the PowerShell v5 enhancements did NOT include time travel, so the v2 binaries that were shipped in 2008 did NOT include the code we wrote in 2014. The 2.0 .NET Framework (which is required for PowerShell’s v2 engine) is not included by default in Windows 10 and later, but an attacker or Red Teamer could enable or install it. Prior to Windows 10, it is available by default, so they could just use it.
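
A quick way to check whether the v2 engine is even usable on a machine is simply to ask for it from a PowerShell prompt:

## Prints 2.0 if the v2 engine (and the .NET 2.0 / 3.5 runtime it requires) is
## installed; fails with an error otherwise
powershell.exe -Version 2 -NoProfile -Command '$PSVersionTable.PSVersion'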

 

Hosting Applications Compiled using V2 Reference Assemblies

When somebody compiles a C# application that leverages the PowerShell engine, they link against reference assemblies. If they link against the PowerShell v2 reference assemblies during development, Windows will use the PowerShell v2 engine (if available) when the application runs. Otherwise, PowerShell's type forwarding will run the application against the currently installed PowerShell engine.

This is what happens when PowerShell Empire's "psinject" module attempts to load PowerShell into another process (such as notepad).

 

Detection and Prevention

You have several options to detect and prevent PowerShell Downgrade Attacks.

Event Log

As a detection mechanism, the “Windows PowerShell” classic event log has event ID 400. This is the “Engine Lifecycle” event, and includes the Engine Version. Here is an example query to find lower versions of the PowerShell engine being loaded:

Get-WinEvent -LogName "Windows PowerShell" |
    Where-Object Id -eq 400 |
    Foreach-Object {
        $version = [Version] ($_.Message -replace '(?s).*EngineVersion=([\d\.]+)*.*','$1')
        if($version -lt ([Version] "5.0")) { $_ }
    }

 

AppLocker / File Auditing

When the CLR loads PowerShell assemblies, it will first load the managed assemblies from the GAC (if they are available). It will also load the native images that contain pre-jitted code if the assemblies are NGEN’d (which they are). The directory listing below shows where the v2 and v4 copies of System.Management.Automation live on disk.

These file loads can either be an audit trigger, or can be blocked outright.

Be careful to not be too selective on the directories you monitor, as the CLR can also load assemblies from specific directories. For example, it is possible to use the CLR’s undocumented / unsupported DEVPATH environment variable to force the CLR to use a specified version of the assemblies rather than the GAC’d version. And if you don’t have a GAC’d version to override, PowerShell will do regular LoadLibrary() probing to find one – including its installation directory.

In addition, PowerShell can be launched as either a 32-bit or a 64-bit process. A 64-bit system will load 64-bit PowerShell by default, and a 32-bit system will load 32-bit PowerShell. On a 64-bit system, though, Windows will implicitly change the version of PowerShell that gets launched based on the bitness of the launching application: a 32-bit app will launch other 32-bit apps. It is also possible for users or applications to do this explicitly by launching PowerShell from the WOW64 directory: c:\windows\syswow64\windowspowershell\v1.0\powershell.exe.

PS > dir *.dll -rec -ea ig | % FullName | ? { $_ -match 'System\.Management\.Automation\.(ni\.)?dll' }
C:\windows\assembly\GAC_MSIL\System.Management.Automation\1.0.0.0__31bf3856ad364e35\System.Management.Automation.dll
C:\windows\assembly\NativeImages_v2.0.50727_64\System.Management.A#\8b1355a03394301941edcbb9190e165b\System.Management.Automation.ni.dll
C:\windows\assembly\NativeImages_v4.0.30319_32\System.Manaa57fc8cc#\08d9ad8b895949d2a5f247b63b94a9cd\System.Management.Automation.ni.dll
C:\windows\assembly\NativeImages_v4.0.30319_64\System.Manaa57fc8cc#\4072bc1c91e324a1f680e9536b50bad4\System.Management.Automation.ni.dll
C:\windows\Microsoft.NET\assembly\GAC_MSIL\System.Management.Automation\v4.0_3.0.0.0__31bf3856ad364e35\System.Management.Automation.dll

 

If you’re going down the enforcement route via AppLocker or Device Guard, the most robust solution is to block earlier versions of the PowerShell engine by version. Be sure to block both the native image and MSIL assemblies:

C:\Users\leeholm>powershell -version 2 -noprofile -command "(Get-Item ([PSObject].Assembly.Location)).VersionInfo"

ProductVersion   FileVersion      FileName
--------------   -----------      --------
6.1.7600.16385   6.1.7600.16385   C:\WINDOWS\assembly\GAC_MSIL\System.Management.Automation\1.0.0.0__31bf3856ad364e3...


C:\Users\leeholm>powershell -noprofile -command "(Get-Item ([PSObject].Assembly.Location)).VersionInfo"

ProductVersion   FileVersion      FileName
--------------   -----------      --------
10.0.14986.1000  10.0.14986.1000  C:\WINDOWS\Microsoft.Net\assembly\GAC_MSIL\System.Management.Automation\v4.0_3.0.0...


C:\Users\leeholm>powershell -version 2 -noprofile -command "(Get-Item (Get-Process -id $pid -mo | ? { $_.FileName -match 'System.Management.Automation.ni.dll' } | % { $_.FileName })).VersionInfo"

ProductVersion   FileVersion      FileName
--------------   -----------      --------
6.1.7600.16385   6.1.7600.16385   C:\WINDOWS\assembly\NativeImages_v2.0.50727_64\System.Management.A#\8b1355a0339430...


C:\Users\leeholm>powershell -noprofile -command "(Get-Item (Get-Process -id $pid -mo | ? { $_.FileName -match 'System.Management.Automation.ni.dll' } | % { $_.FileName })).VersionInfo"

ProductVersion   FileVersion      FileName
--------------   -----------      --------
10.0.14986.1000  10.0.14986.1000  C:\WINDOWS\assembly\NativeImages_v4.0.30319_64\System.Manaa57fc8cc#\4072bc1c91e324...