Archives for the ‘Uncategorized’ Category

Scour: Fast, Personal, Local Content Searches

If you have a large collection of documents (source code or text files), searching them with PowerShell or your favourite code editor can feel like it takes forever.


How is it that we can search the entire content of the Internet in milliseconds, but searching your local files can take minutes or hours?

It turns out that there is a solution built into many products and technologies that can help us: the Apache Lucene project. Lucene forms the search engine backbone for many popular products: Solr, Twitter, ElasticSearch, and at least a few hundred others. Unfortunately for us, however, it comes as a software development kit and not a simple tool that you can use from the command line.

With the introduction of Scour – a new PowerShell module – that all changes.


Scour is a PowerShell module that lets you search any directory on your computer using the power of the Lucene search engine backend. After you run an initial indexing process, future searches are tens or hundreds of times faster than the searches you are currently used to.

Install Scour

To install Scour in PowerShell, simply run:

Install-Module Scour -Scope CurrentUser

Create an Index

Lucene accomplishes its excellent search performance by analyzing your files and then storing information in an index. To create this index, go to a directory that contains the files you care about and run:


By default, Scour indexes text files (*.txt) and the source files for popular programming languages. If you want it to index additional file types, you can use the -Path parameter.

Scour will first scan the current directory (and subdirectories) to determine how many files it has to index, and then display a progress bar to let you know how much time is left in the indexing process.


Search your Content

Once you’ve created the index for a directory, use the Search-ScourContent cmdlet:

Search-ScourContent "query"

As mentioned before, Scour leverages the Lucene search engine under the hood, so query should follow the rules of Lucene Search Syntax. This syntax is also described in the about_query_syntax.txt help file included with Scour. Here are a few examples:

  • Search-ScourContent "word1 word2" - Searches for all files that contain word1 or word2
  • Search-ScourContent "word1 AND word2" - Searches for all files that contain both word1 and word2
  • Search-ScourContent word* - Searches for all files that have a word starting with "word"
  • Search-ScourContent word~ - Searches for all files that have a word similar to "word"

By default, Scour returns the files that match your query. You can pipe the results into Select-String, Copy-Item, or any other scripting you might want to do with these results.


If you start your search from within a specific directory, Scour automatically limits your results to documents in that directory or below.

In addition to the file content, Scour also indexes the document paths. This lets you add path-based restrictions to your searches. For example:

  • Search-ScourContent 'path:william AND following'

If you want to search for specific regular expressions within your files, Scour lets you combine the efficiency of indexed document searches with a line-by-line regular expression match. To do this, add the -RegularExpression parameter to your call.

Here’s an example of finding all documents that have the word “following” in them, and then returning just the lines that match the regular expression “follow.*Cambri.*”.


If you want to restrict your searches to a specific file type (e.g., *.cs), you can use the -Path parameter to Search-ScourContent.


Creating a Good Security Conference CFP Submission

So you’re interested in submitting a talk for a security conference? Awesome! Above all else, what keeps our industry moving forward is the free and open sharing of information. Submitting a talk can be a scary experience, and the process for how talks are evaluated can feel mysterious.

So what’s the best way to create a good security conference CFP submission?

It’s perhaps best to consider the questions that the review board will ask themselves as they review the submissions:

  • Will this presentation be of broad interest (i.e., to 10-20% of attendees)?
  • Is this presentation innovative and useful?
  • Is this presentation likely to be able to deliver on its outline and promises?
  • Is this presentation consistent with the values of the conference?

They are also likely going to review submissions in an extremely basic web application or an Excel spreadsheet.


See that scroll bar on the right? It’s tiny. For DerbyCon this year, the CFP board reviewed ~500 submissions. That’s a lot of work, but it’s also an incredible honour. It’s like going to an all-you-can-eat buffet prepared by some of the top chefs in the world. But that buffet is 3 miles long. It’s overwhelming, but in a good way 🙂

Let’s talk about how you can create a good CFP submission based on the questions reviewers are asking themselves.

Review Board Criteria

Will this presentation be of broad interest?

If a conference is split into 5 tracks, accepted presentations must be interesting to about 20% of the attendees. If your talk is too specialized - such as the latest advances in super-theoretical quantum cryptography - you might find yourself talking to an audience of 4.

A common problem in this category is vendor-specific talks. Unless the technology is incredibly common, it will just come across as a sales pitch. And nobody wants to see a sales pitch.

That said, some talks are of broad interest exactly because they are so far outside people’s day-to-day experience. While an attendee may never have the opportunity to experience the lifestyle of a spy, an exposé into the life of one would most certainly have popular appeal.

Is this presentation innovative and useful?

The security industry is incredible at sharing information. For example, @irongeek records and shares thousands of videos from DerbyCon, various BSides, and more. DEF CON has been sharing videos for the last couple of years, as have Black Hat and Microsoft’s Blue Hat. If an audience member is interested in a topic, there’s a good chance they’ve already watched something about it through one of these channels. In your CFP submission, demonstrate that your presentation is innovative or useful.

  • Does it advance or significantly extend the current state of the art?
  • Does it distill the battle scars from attempting something in the real world (i.e., in a large company, team, or product)?
  • If it’s a 101 / overview type presentation, does it cover the topic well?

Is this presentation likely to be able to deliver on its outline and promises?

Presentation submissions frequently promise far more than what they can accomplish. For example:

  • Content outlines that could never be successfully delivered in the time slot allotted for a presentation.
  • Descriptions of research that is in progress that the presenter hopes will bear fruit. Or worse, research that hasn’t even started.
  • Exaggerated claims or scope that will disappoint the audience when the actual methods are explained.

Is this presentation consistent with the values of the conference?

Some presentations are racist, sexist, or likely to offend attendees. This might not be obvious at first, but slang you use amongst your friends or coworkers can come across much differently to an audience. These ones are easy to weed out.

Many conferences aim to foster a healthy relationship between various aspects of the community (i.e., researchers, defenders, vendors), so a talk that is overly negative or discloses an unpatched vulnerability in a product is likely not going to be one that the conference wants to encourage.

On the other hand, some conferences actively cater to the InfoSec tropes of edgy attackers vs defenders and vendors. You might find an otherwise high-quality Blue Team talk rejected from one of those.

Some submissions may appear to skate a fine line on this question, so good ones are explicit about how they will address this concern: for example, mentioning that the vulnerability being presented has been disclosed in coordination with the vendor and will be patched by the time the presentation is given.

Common Mistakes

Those are some of the major thoughts going through a reviewer’s mind as they review the conference submissions. Here are a couple of common mistakes that make it hard for a reviewer to judge submissions.

  • Is the talk outline short? If so, the reviewer probably doesn’t have enough information to evaluate how well the presentation addresses the four main questions from above. A good outline is usually between 150 to 500 words. See talks 3, 4, and 5 from the screenshot above to see how this looks in practice!
  • Does the title, description or outline rely heavily on clichés? If so, the presentation is likely going to lack originality or quality – even if it is for fun and profit.
  • Is the talk overly introspective? Talks that focus heavily on the presenter (“My journey to …”) are hard to get right, since attendees usually need to be familiar with the presenter in order for the talk to have an impact. Many review processes are blind (reviewers don’t know who submitted the session), so this kind of talk is almost impossible to judge.
  • Is the talk a minor variation of another talk? Some presenters submit essentially the same talk, but under two or three names or variations. What primarily drives a reviewer’s decision on a talk is the meat, not a bit of subtle word play in the title. They will likely recognize the multiple variations of the talk and select only one – but which one specifically is unpredictable. When votes are tallied, three talks with one vote each are much less likely to be selected than a single talk with three votes.
  • Is the submission rife with grammar and spelling errors? I don’t personally pay much attention to this category of mistake, but many reviewers do. If you haven’t spent the effort running your submission through spelling and grammar check, how much effort will you spend on the talk itself?

XOR is Not as Fancy as Malware Authors Think

FireEye recently posted some research about an attack leveraging the NetSupport Remote Access tool. The first stage of this attack uses a lot of obfuscation tricks to try to make reverse engineering more difficult.

David Ledbetter and I were chatting about some of the lengths the malware authors went through to obfuscate the content.

One of the major sources of complication is an iterative, multi-pass XOR:

(Image credit FireEye)

Unlike most malware, which obfuscates content by XORing it with a single-byte key, this malware appears to do something much more clever. See the content starting at ‘var tmpKeyLength = 1;’?

  1. XOR each character of the content with the first byte of the encryption key
  2. XOR characters of the content with bytes from the encryption key in the following pattern: 1, 2, 1, 2, 1, 2, 1, 2, …
  3. XOR characters of the content with bytes from the encryption key in the following pattern: 1, 2, 3, 1, 2, 3, 1, 2, 3, …

When malware uses step #1 alone -- or even a repeating single-key XOR -- I like to call it “Encraption”. It appears complicated, but is vulnerable to many forms of cryptanalysis and can be easily broken. Given that this malware did several levels of Encraption, did they manage to finally invent something more secure than a basic repeating key XOR?

Not even close.

XOR is Associative

One of the biggest challenges with using XOR in cryptography is that it is associative: you can rearrange parentheses without impacting the final result. For example, consider again a single-byte key and the following double XOR encryption:

  1. Take the content
  2. XOR each character by the value ‘123’
  3. XOR each character by the value ‘321’
  4. Emit the result

If we were to add parentheses to describe the order of operations:

(Content XOR 123) XOR 321

Because XOR is associative, you can rearrange the parentheses (the order of operations) to make it:

Content XOR (123 XOR 321)

Which gives 314:
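We can check this associativity directly with PowerShell’s -bxor operator; this is just a sketch with a single ‘H’ character standing in for the content:

```powershell
# XOR is associative: (Content XOR 123) XOR 321 == Content XOR (123 XOR 321)
$content = [byte][char]'H'

$doubleEncrapted = ($content -bxor 123) -bxor 321
$singleEncrapted = $content -bxor (123 -bxor 321)

123 -bxor 321                            # 314, the equivalent single key
$doubleEncrapted -eq $singleEncrapted    # True
```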


So, encraption with two keys is still just encraption with a single key and is vulnerable to all of the same attacks.

But what about that rolling key?

The malware above used something more like a rolling key, however. It didn’t just use a couple of single-byte keys: if the content was 100 bytes, it did 100 rounds of XOR based on the key. Surely that must be secure.

Fortunately? Unfortunately? The answer is no. If we remember the malware’s algorithm:

    1. XOR each character of the content with the first byte of the encryption key
    2. XOR characters of the content with bytes from the encryption key in the following pattern: 1, 2, 1, 2, 1, 2, 1, 2, …
    3. XOR characters of the content with bytes from the encryption key in the following pattern: 1, 2, 3, 1, 2, 3, 1, 2, 3, …

Consider the perspective of a single character. It gets encrapted by one byte of the key, and then a different byte of the key, and then a different byte of the key… and so on. And because XOR is associative, as we demonstrated above, that is the same thing as the single character being encrapted by a single byte.

A PowerShell Demonstration

Let’s take a look at a demonstration of this in PowerShell.

First, let’s look at a faithful re-implementation of the original algorithm:


$stringToEncrypt = [char[]] "Hello World!"
$encrypted = $stringToEncrypt.Clone()
$key = 97,4,13,252,119,31,208,156,196,56

$tmpKeyLength = 1
while($tmpKeyLength -le $key.Length)
{
    $tmpKey = $key[0..($tmpKeyLength - 1)]
    for($i = 0; $i -lt $encrypted.Length; $i++)
    {
        $encrypted[$i] = $encrypted[$i] -bxor $tmpKey[$i % $tmpKey.Length]
    }

    $tmpKeyLength++
}

([byte[]]$encrypted) | Format-Hex | Out-String

When you take a look at the result, here’s what you get:


Pretty impressive! Look at all those non-ASCII characters. This must be unbreakable!

To get the equivalent single-pass encraption key, we can just XOR the encrapted string with the original string. How?

XOR is Commutative

We can do this because XOR is commutative as well: you can rearrange the order of terms without impacting the result.

If Encrapted is:

Content XOR Key1 XOR Key2 XOR Key3

then we do:

Encrapted XOR Content

then we get:

Content XOR Key1 XOR Key2 XOR Key3 XOR Content

Because XOR is commutative, we can rearrange terms to get:

Content XOR Content XOR Key1 XOR Key2 XOR Key3

Anything XOR’d with itself can be ignored

One of the reasons XOR encraption works is that anything XOR’d with itself can be ignored. For example:

Encrypted = Content XOR Key

Decrypted = Encrypted XOR Key

By XORing some content with a key twice, you get back the original content. So, back to where we ended the last section: if we XOR the final result with the original content and rearrange, we get:

Content XOR Content XOR Key1 XOR Key2 XOR Key3

That gives us an equivalent single key that we can use: Key1 XOR Key2 XOR Key3.
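A quick PowerShell sanity check of that self-cancellation identity (with arbitrary sample values):

```powershell
# Anything XOR'd with itself is zero, so XORing with the same key twice
# returns the original content.
$content = [byte][char]'H'
$key = 42

72 -bxor 72                                       # 0
(($content -bxor $key) -bxor $key) -eq $content   # True
```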

Here’s an example of figuring out this new single-pass key:


$newKey = New-Object 'char[]' $stringToEncrypt.Length
for($i = 0; $i -lt $stringToEncrypt.Length; $i++)
{
    $newKey[$i] = $encrypted[$i] -bxor $stringToEncrypt[$i]
}

"Equivalent key: " + (([int[]]$newKey) -join ",")

And the result:


And to prove they give equivalent results:


$encrypted = $stringToEncrypt.Clone()
for($i = 0; $i -lt $encrypted.Length; $i++)
{
    $encrypted[$i] = $encrypted[$i] -bxor $newKey[$i]
}

"Easy encrypted:"
([byte[]]$encrypted) | Format-Hex | Out-String


The Nelson Moment

This is where we get to point and laugh a little. Remember how the malware repeatedly XORs the content with various bits of the key? It did far more damage than the malware author realized. Let’s consider the character in the content at position 0 for a moment:

    1. XOR with byte 0 of the key
    2. XOR with byte 0 of the key (thereby stripping the encraption altogether!)
    3. XOR with byte 0 of the key
    4. XOR with byte 0 of the key (thereby stripping the encraption altogether!)

Let’s consider byte 1 of the content:

    1. XOR with byte 0 of the key
    2. XOR with byte 1 of the key
    3. XOR with byte 1 of the key (thereby stripping the work done in step 2)
    4. XOR with byte 1 of the key
    5. XOR with byte 1 of the key (thereby stripping the work done in step 4)

Depending on the length of the key and the content, this pattern of alternately doing work and undoing work previously done continues. It takes what could have been a potentially strong key hundreds of bytes long down to a super expensive way to compute a single key! Here’s a demo of encrapting some GUIDs:
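Here’s a small PowerShell sketch of that effect, reusing the ten-byte key from the earlier re-implementation. Since there are an even number of passes (ten), position 0 is XORed with key byte 0 ten times, and those XORs cancel completely:

```powershell
$key = 97,4,13,252,119,31,208,156,196,56
$content = [byte[]][char[]]"Hello World!"
$encrypted = $content.Clone()

# One pass per key prefix length, just like the malware's algorithm
for($keyLen = 1; $keyLen -le $key.Length; $keyLen++)
{
    $tmpKey = $key[0..($keyLen - 1)]
    for($i = 0; $i -lt $encrypted.Length; $i++)
    {
        $encrypted[$i] = $encrypted[$i] -bxor $tmpKey[$i % $tmpKey.Length]
    }
}

$encrypted[0] -eq $content[0]   # True: byte 0 was never really encrypted
```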


Notice that some characters (28 at the beginning, c9 at offset 0x48) didn’t even get encrypted at all?

Remember folks, don’t roll your own crypto, and especially don’t roll your own crapto.

Part-of-Speech Tagging with PowerShell

When analyzing text, a common goal is to identify the parts of speech within that text – what parts are nouns? Adjectives? Verbs in their gerund form?

To accomplish this goal, the area of natural language processing in Computer Science has developed systems for Part of Speech tagging, or “POS Tagging”. The acronym preceded the version in Urban Dictionary 🙂

The version I used at university was a Perl-based Brill Tagger, but things have advanced quite a bit – and the Stanford NLP group has done a great job implementing a Java version with C# wrappers here:

The default English model is 97% correct on known words, and 90% correct on unknown words. “SpeechTagger” is a PowerShell interface to this tagger.


By default, Split-PartOfSpeech outputs objects that represent words and the part of speech associated with each. The TaggerModel parameter lets you specify an alternate tagger model; the Stanford Part of Speech Tagger supports:

  • Arabic
  • Chinese
  • English
  • French
  • German
  • Spanish

The -Raw parameter emits sentences in the common text-based format for part-of-speech tagging, separating each word and its part of speech with the ‘/’ character. This is sometimes useful for regular expressions, or for adapting code you might have previously written to consume other part-of-speech taggers.

To install this project, simply run the following command from PowerShell:

Install-Module -Name SpeechTagger

Automatic Word Clustering: K-Means Clustering for Words

K-Means clustering is a popular technique to find clusters of data based only on the data itself. This is most commonly applied to data that you can somehow describe as a series of numbers.


When you can describe the data points as a series of numbers, K-Means clustering (Lloyd’s Algorithm) takes the following steps:

  1. Randomly pick a set of group representatives. Lloyd’s algorithm generally picks random coordinates, although it sometimes picks specific random data points.
  2. Assign all of the items to the nearest representative.
  3. Update the group representative to more accurately represent its members. In Lloyd’s algorithm, this means updating the location of the representative to represent the average location of every item assigned to it.
  4. Revisit all of the items, assigning them to their nearest group representative.
  5. If any items shifted groups, repeat steps 3-5.

Applying this technique directly to words is not possible, as words don’t have coordinates. Because of that:

  • Randomly picking a coordinate cannot be used to randomly create a group representative.
  • Refining a group representative based on its current word cluster is more complicated than simply averaging the coordinates of the items in the cluster.

If we follow the philosophy of Lloyd’s algorithm, however, we can still create a version of K-Means Clustering for Words. In our implementation, we:

  1. Pick random words from the provided list as group representatives.
  2. Use Levenshtein Distance (string similarity) to measure "nearest group representative".
  3. Use word averaging to update the nearest group representative. The "average" of a cluster of words is a new word of the average word length, with the letter at each position chosen by taking the most common letter at that position among the cluster's members.
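The distance measure from step 2 can be sketched in PowerShell as the standard dynamic-programming Levenshtein distance. This Get-LevenshteinDistance helper is illustrative only, not part of Get-WordCluster itself:

```powershell
## A minimal Levenshtein (edit) distance using two rolling rows of the DP table
function Get-LevenshteinDistance([string] $first, [string] $second)
{
    # Row for the empty prefix of $first: distance is just the prefix length
    $previous = 0..$second.Length

    for($i = 1; $i -le $first.Length; $i++)
    {
        $current = New-Object 'int[]' ($second.Length + 1)
        $current[0] = $i
        for($j = 1; $j -le $second.Length; $j++)
        {
            # Substitution costs 1 only when the characters differ
            $cost = [int]($first[$i - 1] -ne $second[$j - 1])
            $current[$j] = [Math]::Min(
                [Math]::Min($current[$j - 1] + 1, $previous[$j] + 1),
                $previous[$j - 1] + $cost)
        }
        $previous = $current
    }

    $previous[$second.Length]
}

Get-LevenshteinDistance "Hello" "Hell"   # 1
```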

This is very computationally expensive for large data sets, but can provide some very reasonable clustering for small data sets.

To explore this further, you can download Get-WordCluster from the PowerShell Gallery. It’s as simple as:

  1. Install-Script Get-WordCluster -Scope CurrentUser
  2. (-split "Hello world Hell word SomethingElse") | Get-WordCluster -Count 3

Easily Search for Vanity Ham Call Signs

When you first get your ham radio license, the FCC gives you a random call sign based on your location and roughly your date of application. The resulting call sign is usually pretty impersonal, but the FCC lets you apply for a “vanity” call sign for free.

While the rules for these vanity call signs change depending on your license class (Technician, General, Extra), most of the good (shorter) vanity call signs that fall under the “Extra” rules are taken. So realistically, your options will likely be a 2x3 callsign (group C or group D).


One place to look for available call signs is at Please be kind and don’t pound their server! If you want to do hundreds / thousands of lookups, you can download the FCC database directly.

But if you only want to look at tens / hundreds, here’s a PowerShell script to help out.

$final3 = "LEE"
$prefixes = "WX","KZ"

foreach($prefix in $prefixes) {
    foreach($number in 0..9) {
        $call = $prefix + $number + $final3

        ## The lookup site's URL was elided here; supply it with the -Uri parameter
        $wr = Invoke-WebRequest -Method Post -Body @{ call = $call }
        if($wr.Content -match "no records were found!") { $call }
    }
}

Searching for Content in Base-64 Strings

You might have run into situations in the past where you’re looking for some specific text or binary sequence, but that content is encoded with Base-64. Base-64 is an incredibly common encoding format in malware and in all kinds of binary obfuscation tools.

The basic idea behind Base-64 is that it takes arbitrary binary data and encodes it into an alphabet of 64 (naturally) printable ASCII characters that can be transmitted safely over any normal transmission channel. Wikipedia goes into the full details here:

Some tooling supports decoding of Base-64 automatically, but that requires some pretty detailed knowledge of where the Base-64 starts and stops.

The Problem

Pretend you’re looking for the string, “Hello World” in a log file or SIEM system, but you know that it’s been Base-64 encoded. You might use PowerShell’s handy Base-64 classes to tell you what to search for:
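For example, a quick sketch using the .NET Convert and Encoding classes:

```powershell
# Encode the ASCII bytes of "Hello World" as Base-64
$bytes = [System.Text.Encoding]::ASCII.GetBytes("Hello World")
[Convert]::ToBase64String($bytes)   # SGVsbG8gV29ybGQ=
```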


That seems useful. But what if “Hello World” is in the middle of a longer string? Can you still use ‘SGVsbG8gV29ybGQ=’? It turns out, no. Adding a single character to the beginning changes almost everything:


Now, we’ve got ‘IEhlbGxvIFdvcmxk’.
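You can reproduce this yourself; the leading space below stands in for whatever single character precedes the string:

```powershell
# One extra leading byte shifts every 6-bit group that follows
$bytes = [System.Text.Encoding]::ASCII.GetBytes(" Hello World")
[Convert]::ToBase64String($bytes)   # IEhlbGxvIFdvcmxk
```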

The main problem here is the way that Base-64 works. When we’re encoding characters, Base-64 takes 3 characters (24 bits) and re-interprets them as 4 segments of 6 bits each. It then encodes each of those 6-bit segments into one of the 64 characters that you know and love. Here’s a graphical example from Wikipedia:


So when we add a character to the beginning, we shift the whole bit pattern to the right and change the encoding of everything that follows!

Another feature of Base-64 is padding. If your content isn’t evenly divisible into 24-bit groups, Base-64 encoding pads the final group with zero bits, and uses the “=” character to denote how many padding bytes were needed:
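For example, here’s how the padding grows as the input falls short of a 3-byte boundary:

```powershell
# 1 byte -> two '=' pads, 2 bytes -> one '=' pad, 3 bytes -> no padding
[Convert]::ToBase64String([System.Text.Encoding]::ASCII.GetBytes("H"))    # SA==
[Convert]::ToBase64String([System.Text.Encoding]::ASCII.GetBytes("Hi"))   # SGk=
[Convert]::ToBase64String([System.Text.Encoding]::ASCII.GetBytes("Hi!"))  # SGkh
```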


When final padding is added, you can’t just remove those “=” characters. If additional content is added to the end of your string (e.g., “Hello World!”), that additional content will influence both the padding bytes, as well as the character before them.

Another major challenge comes when the content is Unicode (UTF-16LE, as used on Windows) rather than ASCII. All of these points still apply – but the bit patterns change. Unicode usually represents characters as two bytes (16 bits). This is why the Base-64 encoding of Unicode content representing ASCII text has so many of the ‘A’ character: that is the Base-64 representation of a NULL byte.
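Here’s a sketch of the Unicode case; note all of the ‘A’ characters produced by the interleaved NULL bytes:

```powershell
# UTF-16LE doubles the byte count, interleaving NULL bytes between ASCII characters
$bytes = [System.Text.Encoding]::Unicode.GetBytes("Hello World")
[Convert]::ToBase64String($bytes)   # SABlAGwAbABvACAAVwBvAHIAbABkAA==
```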


The Solution

When you need to search for content that’s been Base-64 encoded, then, the solution is to generate the text at all possible three-byte offsets, and remove the characters that might be influenced by the context: content either preceding what you are looking for, or the content that follows. Additionally, you should do this for both the ASCII as well as Unicode representations of the string.

An Example

One example of Base-64 content is PowerShell’s -EncodedCommand parameter. This shows up in Windows Event Logs if you have command-line logging enabled (and of course shows up directly if you have PowerShell logging enabled).

Here’s an example of an event log like that:


Here’s an example of launching a bunch of PowerShell instances with the -EncodedCommand parameter, as well as the magical Get-Base64RegularExpression command, which generates a regular expression that you can use to match against that content:


As you can see in this example, searching for the Base-64 content of “My voice is my” returned all four log entries, while the “My voice is my passport” search returned the single event log that contained the whole expression.

The Script

Get-Base64RegularExpression is a pretty simple script. You can use this in PowerShell, or any event log system that supports basic regular expression searches.


## Get-Base64RegularExpression.ps1
## Get a regular expression that can be used to search for content that has been
## Base-64 encoded
param(
    ## The value that we would like to search for in Base64 encoded content
    [Parameter(Mandatory = $true)]
    $Value
)

## Holds the various byte representations of what we're searching for
$byteRepresentations = @()

## If we got a string, look for the Unicode and ASCII representations of the string
if($Value -is [String])
{
    $byteRepresentations +=
        [System.Text.Encoding]::Unicode.GetBytes($Value),
        [System.Text.Encoding]::ASCII.GetBytes($Value)
}

## If it was a byte array directly, look for the byte representations
if($Value -is [byte[]])
{
    $byteRepresentations += ,$Value
}

## Find the safe searchable sequences for each Base64 representation of input bytes
$base64sequences = foreach($bytes in $byteRepresentations)
{
    ## Offset 0. Sits on a 3-byte boundary so we can trust the leading characters.
    $offset0 = [Convert]::ToBase64String($bytes)

    ## Offset 1. Has one byte from preceding content, so we need to throw away the
    ## first 2 leading characters
    $offset1 = [Convert]::ToBase64String( (New-Object 'Byte[]' 1) + $bytes ).Substring(2)

    ## Offset 2. Has two bytes from preceding content, so we need to throw away the
    ## first 4 leading characters
    $offset2 = [Convert]::ToBase64String( (New-Object 'Byte[]' 2) + $bytes ).Substring(4)

    ## If there is any terminating padding, we must remove the characters mixed with that padding. That
    ## ends up being the number of equals signs, plus one.
    $base64matches = $offset0,$offset1,$offset2 | % {
        if($_ -match '(=+)$')
        {
            $_.Substring(0, $_.Length - ($matches[0].Length + 1))
        }
        else
        {
            $_
        }
    }

    $base64matches | ? { $_ }
}

## Output a regular expression for these sequences
"(" + (($base64sequences | Sort-Object -Unique) -join "|") + ")"

TripleAgent: Even Zeroer-Tay Code Injection and Persistence Technique


We'd like to introduce a new Zero-Tay technique for injecting code and maintaining persistency against common advanced attacker toolkits dubbed TripleAgent. We discovered this by ourselves in our very advanced labs, and are in the process of registering a new vanity domain as we speak.

TripleAgent can exploit:

  • Every toolkit version
  • Every toolkit architecture (x86 and x64)
  • Every toolkit user (RED / PURPLE / APT / NATION STATE / etc.)
  • Every toolkit process (including PoC, GTFO, PoC||GTFO, METASPLOIT, UNICORN)

TripleAgent exploits a fundamental flaw in the design of commonly used advanced attacker toolkits, and therefore cannot be patched.

Code Injection

TripleAgent gives the defender the ability to inject any DLL into any attacker toolkit. The code injection occurs extremely early during the victim's process boot, giving the defender full control over the process and no way for the process to protect itself. The code injection technique is so unique that it's not detected or blocked by even the most advanced threaty threats.

Attack Vectors

  • Attacking persistence toolkits - Taking full control of ANY persistence toolkit by injecting code into it while bypassing all of its self-protection mechanisms. The attack has been verified and works on all bleeding-edge attacker toolkits including but not limited to: DoubleAgent.

Technical Deep, Deep, Deep, Dive

An example of an advanced attacker toolkit is known as DoubleAgent. This attacker toolkit exploits a fundamental issue in Windows, nay computing, NAY HUMANITY itself.

When this advanced toolkit runs, it is widely acknowledged to provide complete control over other unwitting applications. However, we can apply our new TripleAgent framework to this toolkit to completely neutralize it. Rather than have it infect target systems, we can write a few simple lines of code to make it instead launch the Windows Update settings dialog!

static BOOL main_DllMainProcessAttach(VOID)
{
    PROCESS_Create(L"c:\\windows\\system32\\cmd.exe", L"/c start ms-settings:windowsupdate");

    return TRUE;
}

Once run, we can see the significant impact of our new zero-tay technique. The first invocation installs our TripleAgent exploit, rendering the advanced "DoubleAgent" threat completely harmless during its second invocation.


Unfortunately, there are no mitigations or bypasses for this extremely advanced defensive technique. We do however offer highly-advanced next generation cyber threat intel cloud machine learning offensive services. Just putting that out there.

Adding a Let’s Encrypt Certificate to an Azure-Hosted Website

If you host your website in Azure, you might be interested in adding SSL support via Let's Encrypt. Azure doesn't offer any functionality to automate this or make it simple, but thankfully there are plenty of useful tools in the PowerShell community to fill the gap.

  1. ACMESharp - A PowerShell module to interact with Let's Encrypt.
  2. Azure PowerShell - A set of PowerShell modules to interact with Azure.

What's been missing (until now!) is the glue. So now, here's the glue: Register-LetsEncryptCertificate.ps1.

So the steps:

  1. Install-Module ACMESharp, Azure, AzureRM.Websites
  2. Install-Script Register-LetsEncryptCertificate.ps1
  3. Register-LetsEncryptCertificate -Domain -RegistrationEmail [email protected] -ResourceGroup exampleResourceGroup -WebApp exampleWebApp
  4. Visit



Why is SeDebugPrivilege enabled in PowerShell?

We sometimes get the question: Why is the SeDebugPrivilege enabled by default in PowerShell?

This is enabled by .NET when PowerShell uses the System.Diagnostics.Process class in .NET, which it does for many reasons. One example is the Get-Process cmdlet. Another example is the method it invokes to get the current process PID for the $pid variable. Any .NET application that uses the System.Diagnostics.Process class also enables this privilege.


You can see the .NET code that enables this here:

NativeMethods.LUID luid = default(NativeMethods.LUID);
if (!NativeMethods.LookupPrivilegeValue(null, "SeDebugPrivilege", out luid))
{
    return;
}

IntPtr zero = IntPtr.Zero;
if (NativeMethods.OpenProcessToken(new HandleRef(null, NativeMethods.GetCurrentProcess()), 32, out zero))
{
    NativeMethods.TokenPrivileges tokenPrivileges = new NativeMethods.TokenPrivileges();
    tokenPrivileges.PrivilegeCount = 1;
    tokenPrivileges.Luid = luid;
    tokenPrivileges.Attributes = 2;
    NativeMethods.AdjustTokenPrivileges(new HandleRef(null, zero), false, tokenPrivileges, 0, IntPtr.Zero, IntPtr.Zero);
}