Resolving the Intune error “The sync could not be initiated (0x80190190)”

If you’re running into the following error trying to get a device to sync with Intune:

The sync could not be initiated (0x80190190)

You have probably also checked the Windows Event Log and seen this error:

MDM Session: OMA-DM message failed to be sent. Result: (Bad request (400).).

I recently ran into this situation, and the cause was that I had opted into the Windows 10 default of signing in with a Microsoft Account. Under “Access Work or School” in Settings, it showed both my work account and my home account.

If you try to remove your home account, you might get an unhelpful error that “This PC isn’t joined to a domain”.

The solution for me was to:

  • Temporarily switch to signing in with a local account
  • Disconnect all registered work accounts
  • Re-connect the work account (just disconnecting the Microsoft Account doesn’t resolve the problem)
  • Switch back to signing in with a Microsoft Account

After that was done, the error should go away and you should once again have a healthy, Intune-managed machine.

Work Simulator 2020

Can you survive March 2020?


It’s March 4, 2020. A pandemic grips the world, so you’re working from home. Can you last the month?

[Web] [Windows] [Mac]

List of InfoSec Cognitive Biases

The mind is an incredibly complex organ. While all of us attempt to be mostly logical and rational in our day-to-day thought processes and decision making, we are hampered by an enormous number of cognitive biases. Cognitive biases are specific natural tendencies of human thought that often result in irrational decision making, and there are hundreds of them. Everybody has them and is impacted by them – it is only through awareness that you can take steps to counteract them.

One of my favourite examples is Loss Aversion. Imagine a game that costs $100 to enter. Most folks would decline to play this game if the odds were a 49% chance of losing your money and a 51% chance of doubling it. A purely rational decision maker, however, would play this game as often as they could, because each play has a positive expected value.
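The arithmetic behind that claim is simple enough to work through directly:

```python
stake = 100

# 51% chance of getting double the stake back, 49% chance of getting nothing
expected_value = 0.51 * (2 * stake) + 0.49 * 0 - stake
print(expected_value)  # roughly +$2 per play
```

A small but positive edge, repeated as often as possible, is a winning strategy; loss aversion makes most of us walk away from it anyway.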

The key realization is that being aware of biases helps you limit how much impact they have on your decision making.

InfoSec Cognitive Biases

Like every other avenue of human thought, the Information Security community is impacted by cognitive biases. While many traditional cognitive biases apply directly to the Information Security context, there are plenty that are unique to our space and worthy of additional awareness.

This post started as a thread on Twitter, and with the participation of several folks has become quite a useful list of thinking patterns to be careful about when making decisions in the realm of Information Security. Thank you to @passingthehash, @gdbassett, @kjstillabower, @joonatankauppi, @marshray, and @mrjbaker for your contributions!

One important point is that a cognitive bias is completely different than being factually incorrect. A cognitive bias represents a flawed mode of thinking, not a flawed thought. For example, a website that disallows special characters in a user name is a decision, not a bias. While a cognitive bias may have been involved in arriving at a factually incorrect decision, the decision itself is not the bias.

Do you have any biases you find common or unique to the security industry? Please comment below and I’ll add them!

Absolutism bias

Description: The tendency to undervalue mitigations that have less-than-ideal security, yet still produce significant risk and harm reduction
Example: Criticizing users or applications that use SMS-based two-factor authentication, despite the alternative often being no two-factor authentication at all.

Actor bias

Description: The tendency to include actor / operator intent in evaluating the security of a system
Example: Designing cryptographic master keys that allow "the good people" to decrypt private data of "the bad people".

Anchoring bias

Description: The tendency to let early facts you learn in a security investigation overly influence your decision-making process
Example: Dismissing an attack as "just drive-by ransomware", missing attackers that use ransomware to burn infrastructure after a much more damaging intrusion.

Authority bias

Description: The tendency to overvalue the opinions that an expert in one domain has about an unrelated domain
Example: Computer Security experts discussing geopolitical events.

Availability bias

Description: The tendency to focus on applications or systems that are recent, nearby, or under active development
Example: Doing deep security analysis of a new buildout at corporate head offices, while systems at an acquired branch office go unpatched.

Bandwagon bias

Description: The tendency to assign excessive merit to a behaviour or technology because others have done so, or that it has historically been done that way
Example: Websites that prevent copy + paste of passwords, despite making it difficult for users of password managers.

Burner bias

Description: The tendency to overestimate one's ambient risk when at industry events, or to adopt some security practices only at those events
Example: Only using a VPN or being suspicious of ATMs while at Black Hat / DEF CON.

Capability bias

Description: The tendency to overvalue the defensive impact of mitigating a published attack when viewed in the context of an adversary that can adapt
Example: Blocking PowerShell on a server, while still allowing arbitrary unsigned executables.

Commutative bias

Description: The tendency to undervalue the likelihood of an attack that only requires the linking of two or more highly-likely events
Example: Thinking that an internal system is highly protected, despite everybody in the company having access to it - and phishing campaigns industry-wide having nearly a 100% success rate.

Domain bias

Description: The tendency to focus on risks and solutions closely related to one's domain of expertise, rather than all risks to a system
Example: Cryptographic experts adding hardware security modules to an architecture, despite pressing application and network security weaknesses.

Endorsement bias

Description: The tendency to place trust in systems or mechanisms that have financial ability as their only barrier to entry
Example: Making security decisions based on "signed code", despite code signing certificates being available to anybody for $85.

Environment bias

Description: The tendency to undervalue risks to a system when analyzed against minor changes to its threat model
Example: Useful "find my phone" applications that become weapons in the context of domestic abuse.

Fatalism bias

Description: The tendency to think of a system as only compromised or not, without investing in post-breach processes and controls
Example: Threat modeling sessions that include the phrase, "well if they got in there, it's game over."

Headline bias

Description: The tendency to use the summary / headline of an event to understand risk, rather than working to understand mitigating conditions
Example: Mocking Linux for the CVE-2019-14287 "SUDO Backdoor", despite most articles properly explaining the rare and nonstandard configuration that would lead to this being a security vulnerability.

High-profile bias

Description: The tendency to prioritize high-profile events in the media, rather than risks associated with the target environment
Example: Rushing to address CPU side-channel attacks, despite a large fleet of unpatched servers.

Hyperfocus bias

Description: The tendency to inconsistently evaluate security of an application based on its unique capabilities
Example: Criticizing an application for a flaw in a security feature that no comparable application even implements.

Impact bias

Description: The tendency to require working proof of a weakness (or impact of a weakness) in a system to sufficiently account for its risk
Example: An unmitigated SQL injection bug that doesn't get fixed until you demonstrate the extraction of data.

Measurability bias

Description: The tendency to place inappropriate weight on the security of a system based on analysis of a measurable security property without regard to context
Example: Criticizing (or applauding) the cryptographic cipher strength used in a system, even when that use has no confidentiality or integrity impact.

More-is-better bias

Description: The tendency to believe that measurable security settings continue to provide return on investment as that control is increased in the "more secure" direction
Example: Recognizing that never-expiring passwords might be a risk, so aggressively pursuing shorter and shorter password expiration durations.

Motivation bias

Description: The tendency to undervalue the risk to a system due to perceived lack of motivation of attackers to target that system
Example: Acknowledging a vulnerability yet dismissing the impact because attackers wouldn't be interested - despite the existence of threat groups that scan the entire internet daily to compromise anything they find exposed.

Novelty bias

Description: The tendency to focus on mitigating the novel aspects of an attack, rather than the root causes and more core defensive mitigations
Example: Focusing on unique command-and-control mechanisms leveraged by an actor, rather than mitigating how they got access in the first place.

Obscurity bias

Description: The tendency to overvalue the security benefit of keeping implementation details secret
Example: Requiring security pen testers to engage in "black box" audits of applications, rather than providing access to source code.

Popularity bias

Description: The tendency to inconsistently evaluate security of an application based on its popularity
Example: Criticizing a popular application for a security weakness that all comparable applications also exhibit.

Publicity bias

Description: The tendency to overestimate the soundness of a decision until subject to broader scrutiny
Example: Deciding to not fix a security issue, yet reversing on this decision as management or the public learns about the risk.

Selection bias

Description: The tendency to make absolute security judgments based on a non-statistical observation of outcomes
Example: Evaluating the security of an application based on the number of CVEs reported on it without accounting for popularity or amount of focus given by security researchers.

Client IP Address Disclosure in various consumer mail servers


When users of several email services send mail using mechanisms other than that service’s web interface (i.e.: the email program on their phone or laptop), the service commonly includes the user’s IP address in message headers. This information disclosure lets recipients of these messages perform some privacy-invasive actions, such as:

    • Determining the approximate geographical location of the sender
    • Correlating separate email addresses used by the same sender
    • Identifying the sender’s broadband and / or cellphone provider

Users looking to send email in a manner that keeps this information private from message recipients should use either the web interface or an alternative mail provider.


Impacted:

  • Gmail
  • Google Suite
  • Apple
  • Office 365
  • (consumer) in uncommon configurations
  • Most ISP email providers

Not impacted:

  • Yahoo, AOL
  • (consumer) in most common configurations
  • Protonmail

Edited 4/13/2020 - A previous version of this post singled out Gmail. While this is the most popular email provider impacted by this flaw in common configurations, they are not the only one.


While the RFC requires that email programs identify themselves to SMTP servers, consumer email is a relatively new thing as far as internet time goes. It represents a change in threat model compared to the work-centric mainframe systems that were at the core of the internet when the RFCs were written. Today, people use consumer email systems in privacy-sensitive situations that they didn’t before - whistleblowing, at-risk communities, avoiding domestic abuse, and more.

People who are uncomfortable with a mail recipient knowing their approximate geographical location should avoid using email applications to send email via the impacted email services, or use a privacy-preserving consumer email service.


Non-web email clients generally use the Simple Mail Transfer Protocol (SMTP) to connect and send mail via their mail service provider. Examples of these non-web email clients are the mail applications built into phones (Android, iOS) or desktop / laptop operating systems (Apple Mail, Windows Mail).

When these applications connect to their mail server's SMTP server to send an outgoing mail, part of the protocol requires a “HELO” or “EHLO” message. RFC 821 describes this exchange as:

      At the time the transmission channel is opened there is an
      exchange to ensure that the hosts are communicating with the hosts
      they think they are.

      The following two commands are used in transmission channel
      opening and closing:

         HELO <SP> <domain> <CRLF>

         QUIT <CRLF>

      In the HELO command the host sending the command identifies
      itself; the command may be interpreted as saying "Hello, I am
      <domain>".

                     Example of Connection Opening

         R: 220 BBN-UNIX.ARPA Simple Mail Transfer Service Ready
         S: HELO USC-ISIF.ARPA
         R: 250 BBN-UNIX.ARPA

                               Example 5


Most mail clients include the user’s public IP address and host name as part of this exchange, with Apple Mail even including the computer name and IP address on the internal network used to send the mail. One could argue that mail clients should not send this information as part of the exchange, but some SMTP servers validate that the actual IP address of the sender matches the content of the SMTP exchange as a form of spoofing protection.
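Python's smtplib illustrates the client side of this exchange: the name announced in EHLO/HELO comes from its local_hostname setting, which by default is derived from the machine's own fully-qualified name. (No connection is made in this sketch; note that even when local_hostname is overridden, the server still sees the real TCP source address.)

```python
import smtplib

# Constructing SMTP() without a host prepares the session without connecting.
# local_hostname is the name the client will announce in its EHLO/HELO command;
# if not supplied, smtplib derives it from the local machine's FQDN.
session = smtplib.SMTP(local_hostname="localhost")
print(session.local_hostname)  # -> localhost
```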

As mail servers transport the message from its original SMTP server to the destination email server, much of this conversation is retained as part of the message’s mail header content. Microsoft has a good resource on how to see and interpret these mail headers here, but an example looks like this:


As you can see in the “Received: from” headers above, some SMTP servers redact the sender’s client IP out of the rest of the exchange, replacing it instead with the address of the SMTP server. Two examples of email services that perform this client IP redaction are (as above) and However, as noted above, there are some scenarios in which does include this information.


The SMTP servers of some services do not perform this redaction, which provides the following data to recipients of the message:



Users looking to send email in a manner that keeps this information private from message recipients should use either the service's web interface or an alternative mail provider.

Disclosure Timeline


March 9, 2020 – Reported to Google security
March 10-19, 2020 – Assisted with steps to reproduce
March 20, 2020 – Resolved by Google as “Won’t Fix (Intended Behavior)”
March 20-31, 2020 – Worked to confirm that there were no future plans to fix
April 2, 2020 – Got confirmation that the team was aware of the issue and has no plans to fix


March 9, 2020 - Reported to Microsoft security
March 19, 2020 - Got confirmation that redaction was intentional for Consumer Outlook. This blog post and responses to it uncovered scenarios where this redaction wasn't being applied, and this is still under investigation.


April 3, 2020 - Reported to Apple security

Searching for Content in Base-64 Strings

You might have run into situations in the past where you’re looking for some specific text or binary sequence, but that content is encoded with Base-64. Base-64 is an incredibly common encoding format in malware and binary obfuscation tools alike.

The basic idea behind Base-64 is that it takes arbitrary binary data and encodes it into a set of 64 (naturally) printable ASCII characters that can be transmitted safely over any normal transmission channel. Wikipedia goes into the full details here:

Some tooling supports decoding of Base-64 automatically, but that requires some pretty detailed knowledge of where the Base-64 starts and stops.

The Problem

Pretend you’re looking for the string, “Hello World” in a log file or SIEM system, but you know that it’s been Base-64 encoded. You might use PowerShell’s handy Base-64 classes to tell you what to search for:
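The same encoding, sketched here in Python rather than the PowerShell shown in the original screenshot:

```python
import base64

# Base-64 encode the literal search string
print(base64.b64encode(b"Hello World").decode())  # -> SGVsbG8gV29ybGQ=
```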


That seems useful. But what if “Hello World” is in the middle of a longer string? Can you still use ‘SGVsbG8gV29ybGQ=’? It turns out, no. Adding a single character to the beginning changes almost everything:


Now, we’ve got ‘IEhlbGxvIFdvcmxk’.

The main problem here is the way that Base-64 works. When we’re encoding content, Base-64 takes 3 bytes (24 bits) and re-interprets them as 4 segments of 6 bits each. It then encodes each of those 6-bit segments as one of the 64 characters that you know and love. Here’s a graphical example from Wikipedia:


So when we add a character to the beginning, we shift the whole bit pattern to the right and change the encoding of everything that follows!

Another feature of Base-64 is padding. If your content isn’t evenly divisible into 3-byte (24-bit) groups, Base-64 encoding will pad the remainder with zero bits. It will use the “=” character to denote how many padding bytes were needed:
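A quick Python illustration of the padding behaviour:

```python
import base64

# One leftover byte needs "==", two leftover bytes need "=",
# and a full 3-byte group needs no padding at all
print(base64.b64encode(b"A").decode())    # -> QQ==
print(base64.b64encode(b"AB").decode())   # -> QUI=
print(base64.b64encode(b"ABC").decode())  # -> QUJD
```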


When final padding is added, you can’t just remove those “=” characters. If additional content is added to the end of your string (i.e.: “Hello World!”), that additional content will influence both the padding characters, as well as the character before them.

Another major challenge is when the content is Unicode rather than ASCII. All of these points still apply – but the bit patterns change. Unicode (as UTF-16, used throughout Windows) usually represents characters as two bytes (16 bits). This is why the Base-64 encoding of Unicode content representing ASCII text has so many of the ‘A’ character: that is the Base-64 representation of a NULL byte.
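You can see the effect directly (sketched in Python, where UTF-16LE matches the two-byte-per-character encoding described above):

```python
import base64

# Every other byte of UTF-16LE ASCII text is a NULL, which is why the
# Base-64 output is peppered with 'A' (the digit for six zero bits)
print(base64.b64encode("Hello World".encode("utf-16le")).decode())
# -> SABlAGwAbABvACAAVwBvAHIAbABkAA==
```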


The Solution

When you need to search for content that’s been Base-64 encoded, then, the solution is to generate the encoded text at all three possible byte offsets, and remove the characters that might be influenced by the surrounding context: content either preceding what you are looking for, or the content that follows. Additionally, you should do this for both the ASCII and Unicode representations of the string.

An Example

One example of Base-64 content is in PowerShell’s –EncodedCommand parameter. This shows up in Windows Event Logs if you have command-line logging enabled (and of course shows up directly if you have PowerShell logging enabled).

Here’s an example of an event log like that:


Here’s an example of launching a bunch of PowerShell instances with the –EncodedCommand parameter, as well as the magical Get-Base64RegularExpression command. That command will generate a regular expression that you can use to match against that content:


As you can see in this example, searching for the Base-64 content of “My voice is my” returned all four log entries, while the “My voice is my passport” search returned the single event log that contained the whole expression.

The Script

Get-Base64RegularExpression is a pretty simple script. You can use this in PowerShell, or any event log system that supports basic regular expression searches.

You can find it on the PowerShell Gallery:

Install-Script Get-Base64RegularExpression.ps1 

Searching for Content in XOR “Encrypted” Data

A while back, we talked about a common challenge in the security industry – searching for some known bad content (i.e.: “Invoke-WebRequest”) in content that you know has been encoded in base64. In a really cool bout of co-discovery, others simultaneously wrote similar implementations. Since then, this approach is now in the process of being integrated into YARA. Very cool times!

Another situation you might have run across is malware authors “encrypting” their content with a static XOR key – a process I like to call “encraption”. One of the neat things about XOR encraption is that you use a single-byte key to encode the data by simply using the XOR operator on each byte of the data. To reverse the process, you just do it again. Despite being horrible from a security perspective, it is somewhat reliable at basic obfuscation to break string searching and simple signatures.

This pattern of decoding content (Base64, XOR, etc.) before running it is extremely common – and is a major driver behind why we added the Antimalware Scan Interface in Windows. This is great at stripping these layers of obfuscation from content at runtime.

But what about static analysis or log hunting?

Like the challenge we had with Base64, SIEM systems don’t generally offer a way to decrapt embedded XOR content to let you search within it. But they do offer regular expressions. Can we take a similar approach to what we did in Base64 – generate a regular expression that matches content in XOR-encoded strings? It turns out, yes!

[Aside – in another wonderful bout of co-discovery, YARA added XOR encoding for files in August 2018.]

Let’s take a simple example – data that has been encrapted directly.


So a little script that reverses this and emits the output looks like this:


One of the key weaknesses of XOR is that there are only 255 possible XOR keys. If this script’s content made it into our SIEM, we could simply brute force the search. We could search for (“encrapted” BXOR 1) and then (“encrapted” BXOR 2) and then … and then (“encrapted” BXOR 40). Eventually, we would end up searching for “MFKZIX\ML” and find it. And fortunately, regular expressions support searching for multiple patterns all at once, so we can have a script simply generate a regular expression for all possible XOR keys.



The full regex is pretty long (255 elements), but this is a portion of what it looks like under the hood:


Now, XOR content is rarely encoded in scripts directly. Depending on the XOR key, the content will usually end up containing bytes that are not valid for use within a string. Usually, you’ll find that scripts have base64-encoded the XOR encraption.

For this scenario, we can leverage the “-Raw” parameter of Get-XorRegularExpression. This will return the raw bytes (rather than the escaped Regex representation), which we can then feed into our base64 regex generator. The result is quite a beast (765 elements: 3 base64 representations of each XOR key), but still a valuable source to hunt with.
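Combining the two generators shows where the 765 elements come from (again a Python sketch, not the actual scripts): three Base-64 alignments for each of the 255 XOR keys.

```python
import base64
import re

def xor_base64_fragments(needle: bytes):
    """Base-64 fragments matching every single-byte-XOR variant of `needle`."""
    fragments = []
    for key in range(1, 256):                        # 255 possible XOR keys
        xored = bytes(b ^ key for b in needle)
        for pad in range(3):                         # 3 Base-64 alignments each
            enc = base64.b64encode(b"\x00" * pad + xored).decode().rstrip("=")
            start = (pad * 8 + 5) // 6               # chars tainted by dummy bytes
            tail = (pad + len(xored)) % 3            # partial final group?
            fragments.append(re.escape(enc[start:len(enc) - (1 if tail else 0)]))
    return fragments

print(len(xor_base64_fragments(b"encrapted")))  # -> 765
```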

Here’s an example of this happening in a script directly (taken from the AMSI blog post earlier):

In this example, the malware author uses a Unicode encoding of the string, so we use the “-Unicode” parameter of Get-XorRegularExpression to have it operate against the Unicode string.


While large, this is a regex we can now use against SIEM systems as well. Here’s an example of searching (and finding!) this content in PowerShell’s Script Block logs in Azure Sentinel:


And for some additional fun, we can even use the –Raw parameter of Get-Base64RegularExpression to generate Yara rules out of these byte sequences.



So, with a bit of creativity, we can now search for base64 content, XOR encoded content, and more in any SIEM that supports regular expressions. Enjoy!

You can download these scripts from the PowerShell Gallery:

Install-Script Get-Base64RegularExpression –Scope CurrentUser
Install-Script Get-XorRegularExpression –Scope CurrentUser
Install-Script New-YaraStringSearchRule –Scope CurrentUser

Star Trek TOS Science, Engineering, Command Embroidery Patterns

If you’re looking to get one of the Star Trek TOS patches (Science, Engineering, or Command) embroidered on something, digitizing services often charge a lot of money to convert the logo to the format that embroidery machines understand.


Here are versions I created in the Brother PES format, as well as in the native format I created them in (EmbroideryWare STICH).

Command - [Brother PES] [EmbroideryWare STICH]

Science - [Brother PES] [EmbroideryWare STICH]

Engineering - [Brother PES] [EmbroideryWare STICH]

Dragon Ball Z Logo Embroidery Pattern

If you’re looking to get the Dragon Ball Z logo embroidered on something, digitizing services often charge a lot of money to convert the logo to the format that embroidery machines understand.


Here’s a version I created in the Brother PES format, as well as in the native format I created it in (EmbroideryWare STICH).

[Brother PES]

[EmbroideryWare STICH]

PowerShell Logo Embroidery Pattern

If you’re looking to get the PowerShell logo embroidered on something, digitizing services often charge a lot of money to convert the logo to the format that embroidery machines understand.


Here’s a version I created in the Brother PES format, as well as in the native format I created it in (EmbroideryWare STICH).

[Brother PES]

[EmbroideryWare STICH]

WebSockets from Scratch


In the web application world – especially single-page applications – smooth and fluid interaction is key. For many years, these applications have done a pretty good job of delivering this fluid interaction through AJAX techniques and browser support for XMLHttpRequest. One issue, however, is that XMLHttpRequest requires that all of your communication go through a text-based HTTP protocol. Another issue is that XMLHttpRequest doesn’t let a server initiate communication back to connected clients. Instead, clients need to continuously poll the server to find out if it has anything to say.

To meet that need, browser vendors introduced the WebSocket protocol. The WebSocket protocol is a persistent, two-way TCP connection between a WebSocket client (traditionally a browser) and a server.

You might wonder: “That’s all well and good, but why should I care?” That’s a good question with a simple answer :) This is a really cool project designed by Tim McGuffin (@NotMedic). Whatever you draw on the website goes onto his badge – via a 64x32 LED board and a bunch more cool software.


Tim carried this around DEF CON this year, and people had lots of fun :)


But can we have some _real_ fun? :)

Digging into the Implementation

When you dig into the code a little bit, you can see how the browser does the communication. For every pixel you draw on the canvas, it converts it into a JSON object with a command called “DRAW”, and “DATA” that represents an X, Y, Color combination. Then it sends that string data to a connected WebSocket (exampleSocket.send(…)).


Partial Automation – Code Generation

The first step to having fun with Tim’s badge was through code generation. It is really easy in PowerShell and .NET to iterate through the pixels of an image (bitmap, PNG, etc.). Each pixel you access gives you its color. From there, I took a picture I manually resized to the correct dimensions and had PowerShell generate JavaScript code that I could copy + paste into the Developer Tools console.


And from there, the Mona Lisa made her first appearance on Tim’s badge :)


That was fun, but copying + pasting code that PowerShell generated into a browser still felt a little hacky. Why don’t I just talk to the WebSocket directly from PowerShell? There are a few C# libraries out there for doing this, but I thought it would be a fun / interesting project to implement the protocol from scratch. So that’s what we’re going to do :)

WebSockets from Scratch

The WebSocket protocol is defined by RFC6455, which goes through the protocol in great detail.

Initial Upgrade Request

For the first phase of the connection, the browser makes a standard TCP connection to the remote server. Here’s how to do that in PowerShell:


In that TCP connection, it makes a standard HTTP request, requesting an upgrade to the websocket protocol. Part of this HTTP request includes a Sec-WebSocket-Key header, which is intended to ensure that random HTTP requests can’t be retargeted to WebSocket servers, and that WebSocket client requests can’t be targeted to arbitrary other TCP servers. Here’s an example, which hard-codes the key for demonstration purposes.


Once the server accepts the connection upgrade, a well-written client will verify that the key included in the server’s response was correctly derived from the Sec-WebSocket-Key you provided. This is what a server response looks like:


Data Frames

Once the connection has been upgraded, client applications send frames of data. This isn’t the raw data itself – there is a structure around the data to describe it appropriately to the server accepting the connection.


To hold the frame data in PowerShell, we start with defining a byte array. The first 8 bits are:

  • FIN: 1 bit describing whether this is the final frame or not. In my experimentation with Chrome, this was always set to ‘1’.
  • RSV1, RSV2, and RSV3: 1 bit each that should always be zero unless you’re implementing a special protocol on top of WebSockets.
  • OPCODE: 4 bits. The one used by Chrome by default is “TEXT FRAME”, which has the value of 0001.

Putting these 8 bits together gives you a single byte with the bits of 10000001. So that’s how we start each frame:

The next 8 bits (the next byte) are:

  • MASK: Whether the data is “masked” by a random obfuscation key. This is recommended for the web browser developers themselves so that malicious web applications can’t cause arbitrary content to be written to the underlying TCP connection itself.
  • Payload Length: If the content is less than 126 bytes, this represents the payload length directly. Otherwise it needs to be the value ‘126’ with the next two bytes representing the actual payload length.

Since we are always supposed to mask the data, we define our mask as ‘1’. However, this is supposed to be the bit at the leftmost position in the byte, so we need to shift it left by 7 places.


Since the mask and the payload length need to share the same byte (MASK being the leftmost bit and Payload Length being the rest), we use the –bor (Binary Or) bitwise operator to combine them into a single byte and then add that byte to the frame.


In the situation where the message length is 126 bytes or greater, though, the next two bytes need to say how long the payload actually is. In C# and PowerShell, the way to get the bytes of a number is through the BitConverter.GetBytes() API. This API returns the bytes as they are represented by your processor. On most processors, this is Little Endian, where the least significant digits (think the 1s and 10s columns in decimal) come first.


The WebSocket protocol, however, requires that these bytes are in “Network Order” (the opposite of Little Endian), so we need to reverse them.
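A quick illustration of the byte-order difference (Python's struct here standing in for BitConverter.GetBytes plus the reversal):

```python
import struct

# A 16-bit payload length of 300 (0x012C) in each byte order
print(struct.pack("<H", 300).hex())  # -> 2c01  (little-endian, BitConverter style)
print(struct.pack(">H", 300).hex())  # -> 012c  (network order, as WebSockets require)
```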


After the frame length, we need to provide the actual masking key. This is 4 bytes. As mentioned earlier, these are values used to obfuscate the content being transmitted to the server so that malicious web applications can’t communicate at the TCP level directly with the WebSocket server. The obfuscation algorithm is a simple XOR, where the four bytes in the masking key are used in a round-robin fashion against the bytes of the content. Because this was just for fun and the security protection is irrelevant, I provided a static masking key of “0”, since anything XOR’d with 0 is unchanged. That way, I didn’t have to implement the masking algorithm and the server wouldn’t notice :)


Next, we get to add our actual content to the frame – in this case, a message representing the JSON of pixels that we want to draw.


Finally, we write this frame to the TCP connection and flush it.
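Putting the whole frame together, here's a compact Python sketch of the same steps the PowerShell walkthrough above performs (text opcode, mask bit, extended length, and the all-zero masking key trick):

```python
import struct

def build_text_frame(message: str) -> bytes:
    payload = message.encode("utf-8")
    frame = bytearray([0b10000001])       # FIN=1, RSV=000, opcode=0x1 (text)
    mask_bit = 1 << 7                     # clients must set the MASK bit
    if len(payload) < 126:
        frame.append(mask_bit | len(payload))
    elif len(payload) < 65536:
        frame.append(mask_bit | 126)      # 126 marker, then 2-byte length
        frame += struct.pack(">H", len(payload))  # network byte order
    else:
        frame.append(mask_bit | 127)      # 127 marker, then 8-byte length
        frame += struct.pack(">Q", len(payload))
    frame += b"\x00\x00\x00\x00"          # all-zero masking key: XOR is a no-op
    frame += payload                      # "masked" payload, unchanged by key 0
    return bytes(frame)
```

The resulting bytes are what gets written to the TCP stream and flushed.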


When all is said and done, pretty pictures show up on Tim’s badge :)

When I sent Rick Astley, Defender ATP alerted Microsoft IT that one of its computers had PowerShell making raw TCP connections to a newly-registered domain. Amazing!


I tried to WannaCry Tim’s badge, and he still hasn’t paid :(


And with true raw WebSockets control, I was able to give Tim’s badge a demoscene :)

So that’s how to go about WebSockets from Scratch. Hope this helps!