Find My and Find Hub Network Research

August 27th, 2025 by Brian

The following is a summary of an internal research project conducted by Hudson H. and Andrew G.

Goals

The Find My network consists of a billion Apple devices (AirTags, iPhones, AirPods, etc.) that communicate using Bluetooth and ultra-wideband, helping each other geolocate lost devices and report them to Apple’s servers.

Google has created the similarly-named but separate Find Hub network. Find Hub also relies on crowdsourced data to determine the location of Android devices.

Given how ubiquitous Apple and Android devices are, it is possible to connect to either of these distributed networks from almost any populated location.

What we set out to learn:

  • How is location reported when the lost device has no connection?
  • Can you send info back to that lost device?
  • How strictly do these corporations regulate their networks (e.g. stalking alerts, snooping on users’ locations)?

Our ultimate questions:

  • Can arbitrary Bluetooth devices use these networks for free geolocation?
  • Can a secure communication channel for data transmission be established?

Abstract

This project demonstrates arbitrary data transmission using Offline Finding networks. Our custom protocol establishes a unidirectional communication channel that is robust, portable, and secure. It highlights critical differences between Apple’s Find My and Google’s Find Hub networks while exploring how unlicensed 3rd parties can piggyback off both of them. We propose deployment scenarios across a variety of architectures.


Find My Protocol

Many research papers have been published since crowdsourced location reports were added to Find My in 2019. We read them all to aggregate the following understanding of the Offline Finding protocol, including what Apple has patched since its creation.

Image of Find My control flow [2]


Pairing

Public, private, and symmetric keys are generated using Elliptic Curve Cryptography (ECC). All of these are stored in the owner’s iCloud keychain, while the device stores only the symmetric and public keys.
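
As a rough illustration, here is what that key material looks like when generated with a standard crypto library (a sketch using P-224, the curve reported in [2]; real devices generate and store this material in the Secure Enclave and keychain):

```python
import os
from cryptography.hazmat.primitives.asymmetric import ec

# Master beacon key pair on the P-224 curve (per [2]), plus a symmetric key
# used later to derive the rolling keys. Illustrative only.
private_key = ec.generate_private_key(ec.SECP224R1())
public_key = private_key.public_key()
symmetric_key = os.urandom(32)

# The 28-byte x-coordinate is what ends up in BLE advertisements.
adv_key = public_key.public_numbers().x.to_bytes(28, "big")
```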


Lost

Once an iPhone’s connection is lost or an AirTag moves away from its owner, the device generates a new rolling public key every 24 hours and continually broadcasts that key using Bluetooth Low Energy (BLE).

This is the custom BLE beacon format for a lost Apple device. The 28-byte public key plus metadata does not fit within a normal 31-byte BLE payload, so part of the key is stored in the MAC address. The MAC technically must be a random static address, but some research suggests that any address type (random static, public, NRPA, or RPA) will be accepted.
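
A minimal sketch of that packing, following the format documented in [2] and [6] (the trailing hint byte and exact bit placement are our reading of that research):

```python
def build_beacon(adv_key: bytes, status: int = 0) -> tuple[bytes, bytes]:
    """Split a 28-byte advertisement key across the MAC address and payload ([2], [6])."""
    assert len(adv_key) == 28
    # First 6 key bytes become the MAC; top two bits set to mark a random static address.
    mac = bytes([adv_key[0] | 0b11000000]) + adv_key[1:6]
    payload = bytes([
        0x1E, 0xFF,        # AD length 30, manufacturer-specific data
        0x4C, 0x00,        # Apple's company ID
        0x12, 0x19,        # Offline Finding type, length 25
        status,            # battery/device-type byte (settable to anything)
    ]) + adv_key[6:] + bytes([adv_key[0] >> 6, 0x00])  # leftover 2 key bits + hint byte
    return mac, payload
```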

Image of Find My BLE packet structure [6]

Note the status byte: it’s supposed to indicate the battery level and device type, but the user can actually set it to whatever they want.


Finding

When nearby Apple devices (“Finders”) see any BLE beacon in the correct format, they create a packet with their own location, encrypt it, and upload it along with the seen key to Apple in the next batch. This is what “crowdsourced” means: you’re getting the locations of nearby finder devices, not GPS data from your own device. Apple then aggregates these reports in the Find My app.

This is the structure of the HTTPS body that finders send to Apple’s servers. Note that the relevant parts of the location report (lat, long, and status) are all encrypted. Also included is the SHA256 hash of the seen public key.

Image of finder upload format [2]


Fetching

A user calculates the public keys that they’d expect the lost device to be transmitting for a given period, queries Apple’s servers for them, and decrypts the resulting location reports. Since the user and device both start from the same symmetric and public key, they can deterministically generate the same rolling keys for a given time period.
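
For illustration, one 24-hour rotation on the owner’s side looks roughly like this, following the derivation described in [2] (a sketch; we simplify the reduction of u and v into the valid scalar range):

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.x963kdf import X963KDF

# Order of the P-224 group, used to keep derived scalars valid.
P224_N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFF16A2E0B8F03E13DD29455C5C2A3D

def rotate(sk_prev: bytes, d0: int):
    """One 24h key rotation per [2]: new symmetric key, then a diversified key pair."""
    sk_i = X963KDF(hashes.SHA256(), 32, b"update").derive(sk_prev)
    uv = X963KDF(hashes.SHA256(), 72, b"diversify").derive(sk_i)
    u = int.from_bytes(uv[:36], "big") % (P224_N - 1) + 1  # simplified reduction
    v = int.from_bytes(uv[36:], "big") % (P224_N - 1) + 1
    d_i = (d0 * u + v) % P224_N                            # rolling private key
    return sk_i, ec.derive_private_key(d_i, ec.SECP224R1())
```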

Apple’s servers act as a key-value store: clients probe with SHA256 hashes (keys) and receive location reports (values) if they exist. Up to 256 keys can be queried in a single HTTPS request, and each key can have up to 20 location reports associated with it. Location reports are stored for 7 days.
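
A sketch of such a probe (the endpoint and body shape follow [2]; authentication headers and optional date-range fields are elided, and FindMy.py [4] handles them in practice):

```python
import base64
import hashlib
import requests

def fetch_reports(adv_keys: list[bytes], auth_headers: dict) -> list[dict]:
    """Probe Apple's key-value store for up to 256 hashed advertisement keys ([2])."""
    ids = [base64.b64encode(hashlib.sha256(k).digest()).decode() for k in adv_keys[:256]]
    resp = requests.post(
        "https://gateway.icloud.com/acsnservice/fetch",  # fetch endpoint per [2]
        json={"search": [{"ids": ids}]},                 # date-range fields omitted
        headers=auth_headers,                            # Apple account auth elided
        timeout=30,
    )
    return resp.json().get("results", [])
```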


Conclusions

Two-way Communication

We were initially optimistic about sending data back to a lost device through the Find My network after reading about the ability to send commands to an AirTag (like playing a sound or wiping it), which would have enabled Command and Control (C2) infrastructure.

Upon further reading and experimentation, however, we realized these actions are only sent through a local BLE or UWB connection. The Find My network does not support bidirectional communication between a user and their lost device unless the device is nearby. Our project is concerned with remote communication, so we contented ourselves with unidirectional data sending.

User Privacy

We agree with the majority of security researchers: Apple’s network is surprisingly charitable toward individual users’ privacy. Apple has very little avenue for selling location data or tracking users.

The location reports are E2E encrypted, and only the private key from the user’s iCloud keychain can decrypt them.

Finders are authenticated as legitimate Apple devices using a mixture of SEP, CA signatures, a unique identifier, and an Elliptic Curve Digital Signature Algorithm (ECDSA) signature; however, they do not send their Apple ID. Apple cannot extrapolate a user’s position by looking at which finders uploaded their reports.

Lack of Authentication

The most interesting aspect of the Find My network is the fact that lost devices are not authenticated as legitimate Apple devices. Finders will upload location reports for any BLE beacon that follows the Find My format.

Additionally, any Apple ID can query for any public key hashes (but will only be able to decrypt those for which they have the key).

Stalking Alerts

How they work:

Apple’s Unwanted Tracking (UT) alerts show a notification when a suspicious device is detected moving with the user for at least 840 meters and 10 mins. The alert is not triggered immediately: it takes 8 hours during the day, 30 mins at night, and only 10 mins if the user returns to a Significant Location (like home). This suspicious lost device must be an AirTag or AirPod that is separated from its owner and broadcasting rolling public keys.

These alerts rely on the fact that the public key only switches every 24 hours. By tracking duplicate keys seen over time, the user can tell if the same AirTag is following them.

Unfortunately, these alerts fail under several conditions, as discovered by previous work [5]:

  • In the BLE advertisement, any beacon marked as either an Apple Device or Find My Device using the status byte will not generate stalking alerts.
  • A device that continuously cycles through new public keys will look like many new AirTags instead of the same one to nearby devices.

We tested both methods by taking our tracker home; we received the following alert while using the basic implementation, and never received one while using either avoidance method.

Transmission

Once we understood the Offline Finding network’s limitations and vulnerabilities, we set out to ensure that existing open-source tools still worked.


OpenHaystack

OpenHaystack [1] is an AirTag spoofing tool made by security researchers who were the first to reverse engineer Apple’s Find My network protocol. Their paper (“Who Can Find My Device?” [2]) is fundamental to creating custom Find My devices.

Using an Ubertooth, we were able to confirm that the static public key was being broadcast.

To fetch the location reports, we upgraded OpenHaystack’s implementation to use the FindMy.py library [4]. It allows us to emulate a MacBook from any device and query for specific keys.

By taking our ESP32 for a walk around the business park, we demonstrated location tracking of a moving device.


SendMy

Positive Security’s Send My [3] is a POC tool for rudimentary data transmission using the Find My network.

We successfully replicated their results.

Limitations:

  • Only 1 bit of data per advertisement.
  • Limited hardware support (ESP32 only).
  • Requires a MacBook to fetch location reports.
  • Very limited amount of data can be sent.
  • The payload is stored in the public key, so the protocol is totally unencrypted; any nearby sniffer could decode the message.

Protocol Upgrades

With the knowledge that offline data transmission is possible, we began building a robust communication channel. The goal was to create frameworks for the device and client (which interacts with Apple’s servers to fetch location reports) that can send (and painlessly reconstruct) any data from any Bluetooth device.

The first step was to store our data payloads in private keys instead of public keys. By converting each payload to its corresponding ECC public key before broadcasting over BLE, our data becomes cryptographically secure from sniffing. Each new chunk of data (of n bits, where n is the number of bits we encode in each advertisement) is a new private/public key pair. The following schema uses all 28 bytes of each private key while balancing messages per week, message length, future flexibility, and cryptographic security:

  • Chunk index (4 bytes, unsigned): index of the bit at which this data chunk starts.
  • Message ID (2 bytes, unsigned): think port number/ID for each datastream.
  • Device ID (20 bytes): shared secret created while pairing.
  • Data (2 bytes): supports variable bit encoding.

Tip: Translation: “At this offset within the stream, message A from device B has bytes x, y.”
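
A sketch of how a chunk could be packed into a 28-byte private key under this schema (the big-endian field order here is our illustration):

```python
import struct
from cryptography.hazmat.primitives.asymmetric import ec

def chunk_to_keys(chunk_index: int, message_id: int, device_id: bytes, data: int):
    """Pack one chunk into a P-224 private key and derive its advertised public key."""
    assert len(device_id) == 20
    priv = struct.pack(">IH", chunk_index, message_id) + device_id + struct.pack(">H", data)
    d = int.from_bytes(priv, "big")  # a valid P-224 scalar with overwhelming probability
    pub = ec.derive_private_key(d, ec.SECP224R1()).public_key()
    adv_key = pub.public_numbers().x.to_bytes(28, "big")  # what gets broadcast
    return priv, adv_key
```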

To fetch a data chunk, our client computes all 2^n possible private keys that could have been broadcast. Since it knows the device ID, message ID, and current chunk index, only the n data bits must be brute-forced. It computes the corresponding ECC public keys, SHA256-hashes them, then queries Apple for all 2^n hashes. Apple tells the client which key it has received location reports for.

Tip: Translation (assuming 2 bits/key): “At this offset for this message and device, are the bits 00, 01, 10, or 11?”
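
Reusing chunk_to_keys() from the sketch above, candidate generation is a small brute-force loop:

```python
def candidate_keys(chunk_index: int, message_id: int, device_id: bytes, nbits: int = 8):
    """All 2^n advertisement keys this chunk could have produced; only `data` is unknown."""
    return {
        data: chunk_to_keys(chunk_index, message_id, device_id, data)[1]
        for data in range(2 ** nbits)
    }

# The client hashes every candidate, queries Apple, and whichever key has
# location reports reveals the chunk's n data bits.
```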

We quickly realized that the number of bits we could send in each key depended on how severely Apple throttles user key lookups. This project did not stress-test Apple’s hospitality, instead electing to typically use 8 bits/key. We faced no difficulty querying 128,000+ keys (500 chunks * 256 keys/chunk) in the span of a couple minutes.

By using the status byte to store another byte of data, the system achieves n = 16 bits/advertisement (8 in the private key payload that require brute forcing, and 8 in the status byte for free).

Chunk failures (when none of the 2^n keys return a valid location) are an interesting case. It can mean 3 things:

  1. The chunk has not arrived yet. It takes time for the advertisements to be uploaded, which we profile more later. Originally, we would pause and retry every failed chunk several times, hoping to fetch the entire message in one go. While optimizing throughput, we began emphasizing entire message retries over chunk retries. By caching found chunks, the client can rapidly update as lost chunks are delivered to Apple. This was a significant step in making our client smarter and more robust.
  2. The chunk is missing. We discuss erasure correction shortly.
  3. The chunk is past the bounds of the message. To detect the end of a message, our simplest idea was the most successful: once x chunks in a row fail (and we have an unbroken chain of resolved chunks up to that point), we assume that the message has ended. We considered sending the message length as metadata so that the client would know definitively when to stop querying, but elected to save time this way.

Synchronization

Everything we’ve outlined so far relies on the device and client deterministically operating on the same message IDs and chunk numbers. Because this method is unidirectional, however, this assumption is problematic. How do we know if our device is sending a message? What if we miss messages because there are no nearby finders? What if we don’t know when our device started transmitting?

It became apparent that an out-of-band synchronization channel was needed. This would be part of the key space that is set aside for denoting which messages have been sent.

We brainstormed many variations of how all 2^16 possible message IDs could be checked, most of them unsuccessful. Placing 1 byte in the key and the other in the status byte was promising, but different status bytes for duplicate keys overwrite each other. The sync packets are the most important chunks, because the client will not know to fetch the rest of the message if it doesn’t see them. For that reason, we wanted a protocol that could be expanded for arbitrary redundancy. We arrived at the following schema, which spreads the message ID across 2 layers. The user can choose how many channels/dropboxes (with 2 layers each) they want.

Device: before the start of every message (and before successive repeats), transmit the 2 keys seen in the diagram (both with message ID = 0). One has the first byte of the message ID, the other has both.

  • After informing the client about what message it’s about to send, switch to that message’s keyspace and begin transmitting chunks.

Tip: Translation: device is placing a note in a mailbox (or several mailboxes!) that says “I’m about to start sending message x”

Client: whenever we don’t know what message to retrieve, check the layer 1 channel(s) for any new message indicators. The client only has to brute-force 2^8 keys twice (once for each layer) to search for possible message IDs the device could be sending. When it sees a message ID that’s unaccounted for, it begins querying for that message’s contents.

Client periodically checks layer 1, and checks layer 2 when:

  1. A new key appears in layer 1.
  2. A report for a key has a different status byte.
    • Example: we have already received message 0x0011 and take a break.
    • Query layer 1: get location reports for [ 0x0000 0x00 device_id 0x00 0x11 ] as expected.
    • However, some of the reports now have status byte = 0x33.
    • We know to check layer 2, and will see reports for both:
      • [ 0x0000 0x00 device_id 0x00 0x11 ]
      • and
      • [ 0x0000 0x00 device_id 0x33 0x11 ]
    • Even if we missed far more than one message since 0x0011 (say message 0x2211 was also sent in the meantime), we’ll also see [ 0x0000 0x00 device_id 0x22 0x11 ] in layer 2.

Tip: Translation: when the client doesn’t know what’s happening, it checks all the mailboxes for new notes (including looking underneath each paper to make sure it’s not covering up another one).
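
In pseudocode, the client’s mailbox check looks roughly like this (sync_key() and fetch() are hypothetical stand-ins for our sync-key construction and report querying):

```python
def discover_messages(device_id: bytes, seen: set[int], fetch, sync_key):
    """Brute-force 2^8 keys per layer instead of 2^16 for the full message ID."""
    for hi in range(256):                          # layer 1: first byte of message ID
        if not fetch(sync_key(1, device_id, hi)):  # sync_key(): hypothetical helper
            continue
        for lo in range(256):                      # layer 2: recover the full ID
            msg_id = (hi << 8) | lo
            if msg_id not in seen and fetch(sync_key(2, device_id, hi, lo)):
                yield msg_id                       # new message to start querying for
```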


Erasure Correction

As we sent increasing amounts of data over the Find My network, it became clear that packet loss was a real (if minor) issue. It’s rare for advertisements not to be picked up, but it becomes more likely if there aren’t enough finders nearby or the message isn’t repeated enough times.

To combat this, we introduced Reed-Solomon erasure correction blocks to our messages. Using a predefined “code rate” (like 80%), the client can reconstruct the message if any 80% of the chunks are retrieved. The additional layer of handling on both sides of the transmission was a source of countless bugs, but once implemented, Reed-Solomon was a large quality-of-life improvement. The slight overhead in message size is worth the breathing room.
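
A minimal sketch with the reedsolo package (our implementation differs in the details): with 40 parity bytes per 160 data bytes, an 80% code rate, any 40 byte positions can be lost and recovered.

```python
from reedsolo import RSCodec  # pip install reedsolo

rsc = RSCodec(40)                       # 160 data + 40 parity bytes = 80% code rate
encoded = bytearray(rsc.encode(bytes(range(160))))

# Chunks that never resolve are erasures at known positions.
missing = [5, 17, 42, 99]
for i in missing:
    encoded[i] = 0                      # placeholder for a lost byte

decoded, _, _ = rsc.decode(encoded, erase_pos=missing)
assert bytes(decoded) == bytes(range(160))
```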


Speed

The biggest point of latency in this protocol comes from waiting for nearby finders to pick up and report all of the advertisements.

Due to Apple finder characteristics, there is a hard limit on how much data each finder can report in a given time frame. Each finder has 96 batches/day, so we estimate a batch of 256 keys (512 data bytes) is uploaded every 15 mins. We estimate that for every +500 bytes of data you send with this method, you can expect to wait another +15 mins for your advertisements to be picked up. Of course, this estimate fluctuates with the number of finders nearby. In the best case where no other legitimate advertisements are reported, each finder device could upload 48 kB of our data per day.

Once all advertisements and corresponding location reports are stored on Apple’s servers, we can achieve download speeds of ~60 bits/s.

Example of Reed-Solomon decoding allowing us to decode a message early (10 bytes were still missing, but the text could be recovered):


Aside: Deployment Scenarios

Once we achieved arbitrary data transmission, we began thinking of deployment scenarios for a unidirectional, low bitrate communication channel. Some ideas included:

  • Keylogger
  • Speech-to-text recording device
  • Passive BLE/WiFi sniffer
  • Images
  • Other embedded devices from previous research projects

It quickly became clear that use cases stretched across many different architectures and device types. This brought us back to a major goal for this project: portability.


Portability

As the project developed, it became clear that a unidirectional, low bitrate communication channel had a range of uses across the embedded software world. To support the wide variety of use cases, we made portability across several architectures and operating systems a priority.

Our Find My spoofing requires fine-tuned control over Bluetooth Low Energy advertisement packets. To achieve this across different hardware, we could use the BT API provided by each OS/manufacturer; for example, we used the high-level ESP BT API up to this point. However, this becomes messy quickly. The APIs expect different formats, offer different levels of customization, and are not guaranteed to provide the commands we need.

Instead, we transitioned to HCI. The Host Controller Interface is a protocol used to communicate between a host and its Bluetooth adapter/controller. Vendor-specific commands aside, HCI packet structure is consistent across different hardware. Therefore, constructing the HCI commands that represent the BT services we need (setting the MAC address, modifying the advertisement body, etc.) is a one-time effort that makes all future portability much easier.
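
For example, the two commands we rely on most can be assembled from the Bluetooth Core Specification’s standard opcodes (a sketch of UART-style framing; other transports differ only in how the packet indicator is carried):

```python
import struct

def hci_cmd(ogf: int, ocf: int, params: bytes) -> bytes:
    """HCI command packet: indicator 0x01, 16-bit opcode, parameter length, parameters."""
    opcode = (ogf << 10) | ocf
    return struct.pack("<BHB", 0x01, opcode, len(params)) + params

mac = bytes.fromhex("c01122334455")   # e.g. the first 6 public key bytes
adv = bytes(30)                       # advertisement body, e.g. from build_beacon() above

set_mac = hci_cmd(0x08, 0x0005, mac[::-1])  # LE Set Random Address (BD_ADDR little-endian)
set_adv = hci_cmd(0x08, 0x0008, bytes([len(adv)]) + adv.ljust(31, b"\x00"))  # LE Set Advertising Data
```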

This formed the basis of our Hardware Abstraction Layer (HAL): a new device only needs a way to send HCI packets to its BT radio and read responses (commonly through UART, USB, or SPI). We refactored our device-side codebase twice as we minimized the amount of platform-specific code. The resulting API layers allow for a seamless transition from raw data, to Reed-Solomon encoded messages, to BT advertisements, to HCI packets, to transmission.


ESP32

After combing through documentation, we discovered a lower-level API (VHCI) for sending HCI commands.

Image of ESP32 HCI structure [7]


Linux

Besides waiting for a BT dongle to arrive and fixing some memory-unsafe BlueZ code, Linux integration was a relatively painless process. Creating a raw socket for the correct interface allows for easy HCI communication. The program must be run as root, which is unavoidable for the level of BT control we require. Our desktops were now AirTags!
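
A minimal sketch of that raw-socket setup (Python exposes these sockets natively on Linux; set_adv is any HCI command packet, such as one built by the hci_cmd() sketch earlier):

```python
import socket

# Raw HCI socket on hci0; requires root/CAP_NET_RAW, as noted above.
sock = socket.socket(socket.AF_BLUETOOTH, socket.SOCK_RAW, socket.BTPROTO_HCI)
sock.bind((0,))       # device id 0 -> hci0
sock.send(set_adv)    # raw HCI command packet from the hci_cmd() sketch
```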


nRF52 DK

The nRF52 DK is natively supported by PlatformIO, which we already had infrastructure for. We decided to use the Zephyr RTOS with it, since Zephyr is one of the RTOSes that PlatformIO natively supports on the nRF52. From there, we looked at the Zephyr source code for its Bluetooth API to identify the lower-layer HCI API.


TI Devices

Integrating TI microcontrollers is a work in progress. We wanted to use PlatformIO to keep mostly unified build tooling for our firmware, but we learned that it doesn’t natively support most TI devices. We moved on to using TI’s own IDE, CCS. The first TI device we used was the TI LaunchXL 2640R2, the subject of a previous research project. After much troubleshooting, we concluded that the board was dead. We ended up going through every TI device in the building and found that every board either did not have Bluetooth (like the TI CC1310) or was dead. Regardless, we implemented the infrastructure for supporting TI devices through a CMakeLists.txt. TI strongly encourages (read: enforces) the use of its own proprietary software, including its own IDE, compiler, RTOS, and flashing software. It was a challenge to integrate the TI build tooling with our main codebase. Our solution:

  • Download the CCS IDE and create a sample project with the desired board. This downloads the correct SDK and compiler to build it.
  • Look at the compiler output and find the paths of the necessary dependencies (outlined in the CMakeLists.txt) that were just downloaded.
  • Use the TI Uniflash tool to flash the built firmware.

Scanning

Each of these supported platforms has its own potential uses for offline data transmission. As a test case, we wanted to scan for nearby BLE devices on all of them. As the first real use case beyond simple text, it shed light on several key areas:

  • Demonstrated consistent data transfer over time using the Find My network.
  • Standardized HCI response parsing to take place in a second thread of execution instead of a callback, letting us abstract data transfer between layers even further.
  • Prompted us to create an abstract class on the client side to support reconstruction of different types of data.

The actual scanning data was less illustrative than we hoped, but it still has some reconnaissance/intel value. We got creative with the graphs, sending abridged “update” packets for seen devices that let us track their proximity over time. This info also tells us whether there are Apple devices nearby, which could help the transmitting device determine whether its advertisements will be picked up. Future improvements include adding WiFi scanning for richer profiling data.


More Networks?

There are many areas with few Apple finders where our protocol would struggle. Why limit ourselves to one Offline Finding network? With Android’s massive market share, Google Find Hub provided an enticing supplement to the project. The first step was to figure out the similarities and differences between how Apple and Google run their networks.

Images of Apple vs Android market shares [8]


Google Find Hub

Unlike Find My, there are very few research papers about Google Find Hub. Much of our info comes from a single paper [9]. Although Find Hub (2013) was released only four years after Find My (2009), offline finding (crowdsourced location reports) was added much later: Apple’s in June 2019, Google’s in April 2024. Since Apple’s crowdsourcing has been available for much longer, it has already been thoroughly reverse engineered. We suspect that Google witnessed the abuse of the Find My network and designed Find Hub with heavy restrictions to prevent the same fate.

Find Hub Protocol

The basic workflow of Offline Finding networks holds true for Google’s Find Hub. These are some key differences:

  • Google knows which public keys (EIDs) belong to which devices and will not store location reports for those that are not associated with a registered device.
  • Users can only query location reports for trackers registered to their account.
  • Much harsher rate limiting and throttling is enforced. Users can only get location updates for a specific tracker once every few hours.
  • No status byte transmitted over the network.

GoogleFindMyTools

GoogleFindMyTools (made by the authors of the above paper) is the prominent open-source Find Hub authority, akin to OpenHaystack. GoogleFindMyTools made key contributions in identifying the contents of the protobufs for important gRPC services on Google’s Spot API. The author also discovered how to use Google’s Firebase project with the Spot API to query for location reports.

Our first step was to ensure that their spoofing method still worked. The company firewall prevented us from talking out to mtalk.google.com on port 5228, so we used Oniux to route our network traffic through Tor exit nodes. To look like a valid user, we made a burner Google account and signed into a Google Pixel created by Google’s own Android emulator. Using GoogleFindMyTools, we were able to view location reports from our ESP32 in the emulator’s Find Hub app.

Limitations: the ESP32 could only broadcast a static key, and there was no way to delete devices registered to your account using GoogleFindMyTools.

Google Data Transmission

Basic Data Transmission

To our knowledge, Find Hub transmission had never been done before. We investigated Google’s heavily regulated system for weeks before coming up with a POC. The fundamentals of our protocol (device sends specific keys, client guesses the data bits while querying) remain the same, but Google did their best to make it difficult. Location reports will only be stored if the device with the associated EID has been registered by the time that report is uploaded. Therefore, each of the 2^n guesses for a data chunk is its own registered device. Typically for Google, n = 2 bits/key. Querying a single one of those guesses takes ~2.25 HTTP requests, not including the device registration or deletion.

To confirm that basic data transmission was possible, we performed a simple test with 2 registered devices. After querying them and finding the correct bit, we had to log into the Find Hub app on the Android emulator and delete the devices from there. Although the method was functional, we realized that automation was needed for it to be viable.

Finding Unknown gRPC Services

We went on a quest for undiscovered capabilities of the Find Hub network that could aid our transmission. We found a GitHub issue listing several of the gRPC services within the Spot API, seemingly found by decompiling a Find Hub APK. There were many interesting gRPC services listed, most prominently DeleteBleDevice and ListEidsForBleDevices. Our first attempt at uncovering these services’ protobufs was to intercept calls from the browser version of Find Hub. As it turns out, the web version of Find Hub has limited capabilities compared to the Find Hub Android app, and we were unable to trigger any interesting gRPC services. The next best option was to reverse engineer the Android Find Hub app. We used the Android emulator with Magisk to root the emulator device and began testing. In the midst of that work, we discovered an overlooked source that revealed the contents of the protobufs. Our client was now able to employ the DeleteBleDevice and ListEidsForBleDevices services.

Client Upgrades

With the power to automatically delete devices, our device-registration transmission method became viable. Because of the overhead involved with each chunk, we spent lots of time looking for optimizations on the client side.

It became apparent that we needed to register as many devices as we could. Upon startup, our client spawns 8 threads to create ~100 devices on our burner Google account. Their info is also prefetched for later interactions.


Another improvement came from examining ListEidsForBleDevices. We realized that UploadPrecomputedKeyIds, an existing gRPC service, could allow us to overwrite the key associated with each device. Instead of deleting and recreating all ~100 devices once their keys have been guessed, this strategy repurposes them to brute-force the next chunks. Creation and deletion only need to happen when a message is first seen and when it is completely fetched.
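
A sketch of the resulting fetch loop (candidate_eid(), upload_keys(), and has_reports() are hypothetical wrappers around our key construction and the UploadPrecomputedKeyIds/report-query services):

```python
import time

WAIT_FOR_UPLOADS = 60  # seconds; tune to finder upload cadence

def fetch_chunk(pool, chunk_index, message_id, device_id, nbits=2):
    """Repurpose pre-registered devices to brute-force one chunk's data bits."""
    guesses = range(2 ** nbits)
    for data, device in zip(guesses, pool):
        eid = candidate_eid(chunk_index, message_id, device_id, data)  # hypothetical
        upload_keys(device, eid)      # overwrite the device's registered key
    time.sleep(WAIT_FOR_UPLOADS)      # reports are only stored for registered EIDs
    for data, device in zip(guesses, pool):
        if has_reports(device):       # hypothetical query wrapper
            return data               # the guess Google stored a report for
    return None
```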


By parallelizing and retrying failed HTTP requests, we were able to substantially increase download speeds for Apple and Google messages.


Google Blocking Tor

Two days before our presentation, critical requests to Google’s servers stopped working. We ran unit tests performing gRPC services that we knew were not blocked by the CCSW firewall, and they were successful. Upon running the same unit tests under Oniux, every single one failed. After combing through git logs and confirming that nothing on our end had changed, we find it likely that Google has blocked the Tor exit nodes. Thankfully, we were able to obtain a firewall exception and stop using Oniux entirely.


Find My/Hub Integration

Now that we can send arbitrary data over an additional Offline Finding network, how does that help us?

This is mostly up to the user and what they value. Transmission over Google’s network is very slow, so it is best used as a backup/auxiliary source compared to Apple (although, if needed, it can certainly stand alone). Our client supports any use cases that integrate both networks, including:

  • Improve throughput by sending data proportionally across the networks based on their speeds (if Apple is 10x faster, send 1 Google chunk for every 10 Apple ones).
  • Use as redundancy for important chunks.
  • Send every chunk over both networks and consult Google iff Apple lost a packet.
  • Use as a side channel to communicate metadata.

Sending Images

A popular suggested deployment scenario was sending images from a connectionless state. The big question was whether we could compress the images down to where our communication channel could handle them. Based on our theories about payload size and wait time, we wanted to squash images into the 500-1000 byte range so that they could be sent in ~30 minutes. We experimented with many different combinations of compression algorithms before finding a process that balanced image recognizability and size. We downscale by 75%, compress heavily with JPEG, and losslessly zip.
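
A sketch of that pipeline with Pillow (the exact scale and quality values varied per image):

```python
import io
import zlib
from PIL import Image  # pip install pillow

def squash(path: str, scale: float = 0.25, quality: int = 10) -> bytes:
    """Downscale by 75%, compress heavily with JPEG, then losslessly zip."""
    img = Image.open(path).convert("RGB")
    img = img.resize((max(1, int(img.width * scale)), max(1, int(img.height * scale))))
    buf = io.BytesIO()
    img.save(buf, "JPEG", quality=quality, optimize=True)
    return zlib.compress(buf.getvalue(), level=9)
```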

The following images were sent using the Find My network in around 30 minutes.

The compression levels can easily be tweaked according to user specifications: for example, this high quality (2.7 kB, wow!) image was sent in 1.5 hours.


Conclusion

  • Offline, unidirectional data transmission and location tracking.
  • Download speeds of 60 bits/s.
  • Custom protocol design that maximizes cryptographic security and throughput.
  • Supports Apple’s Find My and Google’s Find Hub networks in tandem.
  • Low risk of detection (and difficult to track back to the user, even for Apple/Google).
  • Easily cross-platform (only requiring a hardware abstraction layer for HCI sending). Currently supports ESP32, Linux, nRF52 DK, and TI microcontrollers (WIP).
  • Supports different data types (nearby BLE device scanning, text, images).
  • Low energy and cost (free communication using networks we don’t control).
  • Out-of-band synchronization between device and client.
  • Erasure correction (Reed-Solomon encoding) for message reconstruction.
  • Avoids industry-standard anti-stalking mechanisms.

Security FAQ

Can 3rd parties (finders, Apple, or nearby sniffers) spy on the data payload?

  • The data payload is an ECC private key. The corresponding public key is embedded in the public Bluetooth Low Energy advertisements, but the conversion is not reversible. Each beacon looks like a totally random AirTag.
    • Note: the status byte is encrypted by the finders by the time Apple sees it, but it is not encrypted during advertisements. That is 8 bits of data from our datastream that could be intercepted by nearby sniffers (including the finders themselves). If this is unacceptable, then status byte data can be disabled.

Are we vulnerable to replay attacks?

  • Slightly. An attacker doesn’t need to understand the public key in order to repeat it. Similarly to above, this poses an issue with the status byte. They could poison half of a chunk by repeating with a bogus status byte. The solution to this is to disable the status byte and accept lower data rates.

Will these devices trigger Apple’s stalking alerts?

  • No. They check for AirTags that repeat the same public key for 24 hours. Our protocol’s keys change rapidly. If we repeated the same message for several hours, then an alert could trigger. It would look like hundreds of AirTags were following.

Can an attacker query our data in our stead if they know our protocol? Can they send false info?

  • Yes, but only if they know the 20-byte device ID that we’ve chosen. This cryptographically secure shared secret, set during pairing, ensures that our device and client have a protected channel of communication.

Do we have to worry about conflicts with real AirTag keys?

  • Not that we’ve witnessed. The 28-byte keyspace seems sufficiently large, and reports are only stored for a week.

Can this method be correlated to our device or identity?

  • Apple could identify your burner account querying for thousands of location reports, but we haven’t gotten banned yet. Apple also cannot see where those E2E-encrypted location reports are coming from. Google certainly cares about the spam, but they’d have to work very hard to get around burner accounts. To nearby devices, it simply looks as though hundreds of AirTags are surrounding them. They cannot easily trace the source of the advertisement spam.

Resources

[1]: OpenHaystack tool

[2]: Who Can Find My Device?

[3]: Positive Security’s Send My

[4]: FindMy.py Library

[5]: Who Tracks the Trackers? Circumventing Apple’s Anti-Tracking Alerts in the Find My Network

[6]: Track You: A Deep Dive into Safety Alerts for Apple Airtags

[7]: ESP32 Undocumented Bluetooth Commands: Clearing the Air

[8]: iPhone vs Android User Stats (2025 Data)

[9]: Okay Google, Where’s My Tracker? Security, Privacy, and Performance Evaluation of Google’s Find My Device Network

[11]: TagAlong: Free, Wide-Area Data-Muling and Services

“Uncontrolled mules move in unknown ways.”

[12]: Where Is My Tag? Unveiling Alternative Uses of the Apple FindMy Service

[13]: Apple AirTag Reverse Engineering

[14]: Tracking You from a Thousand Miles Away! Turning a Bluetooth Device into an Apple AirTag Without Root Privileges

[15]: AirGuard – Protecting Android Users From Stalking Attacks By Apple Find My Devices

[16]: How we built the new Find Hub network with user security and privacy in mind

[17]: 5 ways to use the new Find My Device on Android

[18]: Find Hub Network Accessory Specification