How I use Twitter


For most people, the official Twitter client works fine: it’s optimized to surface new content you might be interested in, makes it easy to follow new users, and shows the most relevant content first. If you have an engineering mindset, however, chances are you want to be in control of what you see in your timeline.

I use Twitter to stay up to date with certain people. I want to hear about their new projects, newly published content, blog posts, thoughts, etc. I’m not interested in political opinions or sports scores; I already have Facebook for that. If I follow someone, I read every single tweet from them. For the last 5 years, I haven’t missed a tweet in my timeline, so I have to be very careful about who I follow and what content I see. So I set out to customize Twitter to achieve that goal, and to only see about 50-75 tweets per day.


I’ve been using Tweetbot for the last few years; the technique described below might also work with other third-party Twitter clients.

Muted Keywords

A very basic list of words; as soon as a tweet contains one of them, it is hidden. Examples include:

  • headphone jack
  • drake
  • podcast
  • president

Muted users

I stopped using this feature now that I use secret lists to follow people (see below) and have disabled RTs. Muting users for a given time period, or forever, is useful in a few situations:

  • Some users in your timeline might promote a product, so you can mute that product
  • If a user is at a conference/event you’re not interested in, you can mute them for a few days

Muted Regexes

A very powerful feature of Tweetbot is the ability to define regexes to hide tweets. I use it to hide annoying joke patterns like

  • remember \w+
  • german word for \w+
  • \w+ is the new \w+

or to hide tweets from people who think we’re interested in their airplane delays or #sports

  • (virgin|Virgin|@United|delta|Delta|JetBlue|jetblue)
  • For every #sports #event there are also custom-made mute filters (truncated): (?#World Cup)(?i)((?# Terms)(Brazil\s*2014|FIFA|World\s*Cup|Soccer|F(oo|u)tbal)|(?# Chants)(go a l |[^\w](ole\s*){2,})|(?# Teams)(#(B....
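
To make the behavior concrete, here is a minimal Python sketch (not Tweetbot itself; the filter list is copied from the examples above) of how such mute filters apply to a timeline:

```python
import re

# A few of the mute regexes from above; any match hides the tweet.
MUTE_FILTERS = [
    r"remember \w+",
    r"german word for \w+",
    r"\w+ is the new \w+",
    r"(virgin|Virgin|@United|delta|Delta|JetBlue|jetblue)",
]

def is_muted(tweet):
    """Return True if any mute filter matches anywhere in the tweet."""
    return any(re.search(pattern, tweet) for pattern in MUTE_FILTERS)

print(is_muted("orange is the new black"))     # True: hidden
print(is_muted("Just published a blog post"))  # False: shown
```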

Hide all mentions

This very much changed my whole timeline (for the better). Turns out, I follow people for their announcements, what they work on, what they’re doing, what they’re thinking about, etc. I actually don’t want to see 2 people communicating publicly using @ mentions, unless it’s a topic I’m interested in. So I started hiding all tweets that start with an @ symbol using a simple Tweetbot regex

  • ^@

If I want to see the responses to a tweet, I swipe left to see all replies.
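
As a sketch (plain Python re, not Tweetbot): the ^ anchor is what makes this filter hide only replies, while tweets that merely mention someone mid-sentence still show up:

```python
import re

# "^@" only matches at the start of the tweet, i.e. replies.
hide_replies = re.compile(r"^@")

print(bool(hide_replies.search("@alice thanks, fixed!")))         # True: hidden
print(bool(hide_replies.search("Thanks to @alice for the fix")))  # False: shown
```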

Muted Clients

Muting certain clients has been amazing: it’s very easy to set up and cleans up your timeline a lot. Some of the clients I mute:

  • Buffer (to avoid “content marketing”, so many companies make the mistake of tweeting the same posts every week or so using Buffer)
  • IFTTT (lots of people use it to auto-post non-original content)
  • Spotify
  • Foursquare (I follow friends on Swarm already, no need to see it twice)
  • Facebook

Secret Lists

One issue I had was balancing the number of tweets in my timeline with being polite and following friends back. To avoid the whole “Why are you not following me?” conversation, I now use a private list to follow only about 300 people. I open-sourced the script I used to migrate all the people I used to follow over to a private list.

Disable RTs

This has been a great change: as described above, I follow people for what they do, what they think, and what they’re working on. Some people have the habit of RTing content that might be interesting, but isn’t relevant to why I subscribed to their tweets. On Tweetbot, you can disable RTs on a per-user basis.

Muting hashtags

I thank everyone for using hashtags for certain events, making it easy to hide them from my timeline :)

Disadvantages of this approach

Some of the newer Twitter features don’t have an API, and therefore can’t be offered by Tweetbot. This includes Polls, Moments and Group DMs. Since I don’t want to miss group DMs, I set up email notifications for Twitter DMs, and set up a Gmail filter to auto-archive emails that are not from group DMs.


I’ve spent quite some time optimizing this workflow; it’s very specific and probably not useful for most people. I try to minimize my time on social media, so I only browse my Twitter feed when I have a few minutes to kill on the go. That means I work through my timeline only on my iPhone, and reply to mentions and DMs only on my Mac. I don’t want to come across as uninterested: I do follow people on Facebook, and I do read news and stay up to date. Twitter is a place for very specific content for me, and I want to keep using it that way.

Unless otherwise mentioned in the post, those projects are side projects which I work on on weekends and evenings, and are not affiliated with my work or employer.

Tags: twitter

follow.user - track the user's website activities, steal their data & credentials and add your own ads to any website in your iOS app


Most iOS apps need to show external web content at some point. Apple provided multiple ways for a developer to do so, the official ones are:

Launch a URL in Safari

This uses the app switcher to move your own app into the background. The user gets their own browser (Safari), with their sessions, content blockers, browser plugins (e.g. 1Password), etc. However, as launching Safari puts your app into the background, many app developers worry the user won’t come back.

Check out the first video to see how this looks in action ➡️

Use in-app SFSafariViewController

Many third party iOS apps use this approach (e.g. Tweetbot).

It allows an app developer to use the built-in Safari, with all its features, from within the application, without making the user leave it.

Check out the second video to see how this looks in action ➡️

Current state with larger social network apps

Many larger iOS apps have re-implemented their own in-app web browser. While this was necessary many years ago, nowadays it’s not only no longer required, it actually adds a major risk for the end-user.

Those custom in-app browsers usually use their own UI elements:

  • Custom address bar
  • Custom SSL indicator
  • Custom share button
  • Custom reload button

Problems with custom in-app browsers

If an app renders its own WKWebView, it not only inconveniences users, it actually puts them at serious risk.


User session

The user’s login session isn’t available, meaning that if you get a link to e.g. an Amazon product, you now have to log in and enter your two-factor authentication code before you can purchase it.

Browser extensions

If the user has browser extensions (like password managers), they won’t have access to them in a custom in-app browser.

Deep linking

Deep linking itself has multiple open issues on the iOS platform. A custom in-app browser adds an extra layer that doesn’t work well with deep linking: instead of opening the Amazon app when the user taps an Amazon link in “Social Media App X”, it opens the product in a plain web view, with no login session and no way to open the product in the app.

Content blockers

If the user has content blockers installed, they’re not being used by custom in-app browsers.


Bookmarks

There is no way for the user to store the current URL in their bookmarks.

Share a website

Apps use this opportunity to force their users to use whatever “social features” they think are useful to them. Usually that means locking the user into their ecosystem, and not allowing people to share the content on the platform of their choice. There should be an explicit App Store rule against this.

Security & Privacy

Using a custom in-app browser allows the app developer to inject ANY JavaScript code into the websites the user visits. This means any content, any data and any input that is shown or stored on the website is accessible to the app.


This is basically the main reason why in-app browsers are still a thing: they allow the app maintainer to inject additional analytics code without telling the user. This way, the app’s developer can track the following:

  • How long does the user visit the linked website?
  • How fast does the user scroll?
  • Which links does the user open, and how long do they stay on each of them?
  • Combined with watch.user, the app can record you while you browse third party websites, or even use the iPhone X face sensor to parse your face
  • Every single tap, swipe or any other gesture
  • Device movements, GPS location (if granted) and any other granted iOS sensor, while the app is still in the foreground.

User credentials

Any app with an in-app browser can easily steal the user’s email address, passwords and two-factor authentication codes. It can do that by injecting JavaScript code that bridges the data over to the app, or sends it directly to a remote host. This is simple; it’s basically code like this:

const email = document.getElementById("email").value;
const password = document.getElementById("password").value;

That’s all that’s needed: just inject the code above into every website, run it on every keystroke, and you’ll get a nice list of email addresses and passwords.

To run JavaScript in your own web view, you can use WKWebView’s evaluateJavaScript:completionHandler: method:

NSString *script = @"document.getElementById('password').value";

[self.webView evaluateJavaScript:script completionHandler:^(id result, NSError *error) {
    // "result" now contains the content of the password field
}];

User data

Once the user is logged in, you also get access to the full HTML DOM plus all JavaScript data and events, meaning you have full access to whatever the user sees. This includes things like their emails, their Amazon order history, their friend list, or whatever other data/website they access from the in-app web view.


SSL indicator

Usually the web browser has a standardised way of indicating the SSL certificate next to the browser’s URL. In a custom in-app browser, the SSL logo is added by the app’s author, meaning you trust the app’s maintainer to only show the logo if the certificate is actually valid.


Injecting ads

Custom in-app browsers allow app developers to inject their own ad system into any website shown inside their app. Worse, they can replace the ad identifiers of ads already displayed on the website, so that the revenue goes directly to them instead of the website owner.

And more

These are just some of the things that immediately come to mind every time I use an in-app browser; there are probably a lot more evil things a company or SDK could be doing.

How can we solve this?

  • Reject apps that don’t use SFSafariViewController or launch Safari directly to show third-party website content
  • There should be exceptions, e.g. if a web view is used to show parts of the UI or dynamic content, but it should be against App Store policy to use web views to show a linked or third-party website

I also filed a radar for this issue.


Tags: security, privacy, sdks

Trusting third party SDKs

Third-party SDKs can often easily be modified while you download them. Using a simple person-in-the-middle attack, anyone in the same network can insert malicious code into the library, and with that into your application, which then ends up running in your users’ pockets.

31% of the most popular closed-source iOS SDKs are vulnerable to this attack, as well as a total of 623 libraries on CocoaPods. As part of this research I notified the affected parties, and submitted patches to CocoaPods to warn developers and SDK providers.

What are the potential consequences of a modified SDK?

It’s extremely dangerous if someone modifies an SDK before you install it. You are shipping your app with that code/binary. It will run on thousands or millions of devices within a few days, and everything you ship within your app runs with the exact same privileges as your app.

That means any SDK you include in your app has access to:

  • The same keychain your app has access to
  • Any folders/files your app has access to
  • Any app permissions your app has, e.g. location data, photo library access
  • iCloud containers of your app
  • All data your app exchanges with a web server, e.g. user logins, personal information

Apple enforces iOS app sandboxing for good reasons, so don’t forget that any SDK you include in your app runs inside your app’s sandbox, and has access to everything your app has access to.

What’s the worst that a malicious SDK could do?

The attack described here shows how an attacker can use your mobile app to steal sensitive user data.

Web Security 101

To understand how malicious code can end up bundled into your app without your permission or awareness, I’ll provide the background necessary to understand how a MITM attack works and how to avoid one.

The information below is vastly simplified, as I try to describe things in a way that a mobile developer without too much network knowledge can get a sense of how things work and how they can protect themselves.


HTTP: unencrypted traffic; anybody in the same network (WiFi or Ethernet) can easily listen to the packets. It’s very straightforward on unencrypted WiFi networks, but actually almost as easy on a protected WiFi or Ethernet network. There is no way for your computer to verify that the packets came from the host you requested data from; other computers can receive the packets before you, open and modify them, and send the modified version on to you.

HTTPS: with HTTPS traffic, other hosts in the network can still capture your packets, but can’t open them. They still see some basic metadata, like the host name, but no details (the body, the full URL, …). Additionally, your client verifies that the packets came from the original host and that no one along the way modified the content. HTTPS is based on TLS.

How a browser switches from HTTP to HTTPS

Enter a plain http:// URL of a site you regularly use in your web browser (make sure to type “http”, not “https”). You’ll see how the browser automatically switches from the insecure “http” protocol to “https”.

This switch doesn’t happen in your browser, but comes from the remote server, as your client (in this case the browser) can’t know which protocols the host supports. (The exception is hosts that make use of HSTS.)

The initial request happens via “http”, so the server has no choice but to respond in clear text “http” to tell the client to switch over to the secure “https” protocol with a “301 Moved Permanently” response code.

You probably already see the problem: since the response is also sent in clear text, an attacker can modify that particular packet and rewrite the redirect destination to stay on unencrypted “http”. This is called SSL Stripping, and we’ll talk more about it later.
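
The redirect-and-strip dance can be sketched in a few lines of Python; the “server” and “attacker” here are plain functions standing in for real network hosts, and the URLs are made up:

```python
def server_response(url):
    """A server that upgrades any http:// request to https:// via a 301."""
    if url.startswith("http://"):
        return {"status": 301, "location": "https://" + url[len("http://"):]}
    return {"status": 200, "location": url}

def ssl_strip(response):
    """An on-path attacker rewrites the redirect target back to http://."""
    if response["status"] == 301 and response["location"].startswith("https://"):
        response["location"] = "http://" + response["location"][len("https://"):]
    return response

# Without an attacker, the client ends up on https://
print(server_response("http://example.com/login")["location"])
# With SSL Stripping, the client stays on http:// and never notices
print(ssl_strip(server_response("http://example.com/login"))["location"])
```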

How network requests work

Very simplified, network requests work on multiple layers. Depending on the layer, different information is available on how to route a packet:

  • The lowest layer (Data Link Layer) uses MAC addresses to identify hosts in a network
  • The layer above (Network Layer) uses IP addresses to identify hosts in the network
  • The layers above add port information and the actual message content

If you’re interested, you can learn how the OSI (Open Systems Interconnection) model works, and in particular its TCP/IP implementation.

So, if your computer now sends a packet to the router, how does the router know where to route the packet based on the first layer (MAC addresses)? To solve this problem, the router uses a protocol called ARP (Address Resolution Protocol).

How ARP works and how it can be abused

Simplified, the devices in a network use ARP mapping to remember where to send packets of a certain MAC address. The way ARP works is simple: if a device wants to know where to send a packet for a certain IP address, it asks everyone in the network: “Which MAC address belongs to this IP?”. The device with that IP then replies to this message ✋

Unfortunately, there is no way for a device to authenticate the sender of an ARP message. An attacker can therefore respond to ARP requests faster than the legitimate device, or send unsolicited replies, essentially saying: “Hey, please send all packets that should go to IP address X to this MAC address”. The router will remember that and use it for all future requests. This is called “ARP poisoning”.

See how all packets are now routed through the attacker instead of going directly from the remote host to you?
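
The core weakness can be modeled as a lookup table with no authentication; this toy Python sketch (made-up hosts and MAC addresses, not real networking code) shows how the last reply wins:

```python
# Toy ARP table: the latest (unauthenticated) reply for an IP wins.
arp_table = {}

def handle_arp_reply(ip, mac):
    """There is no sender verification; whoever answers last wins."""
    arp_table[ip] = mac

# The legitimate device answers the ARP request:
handle_arp_reply("192.168.0.5", "aa:aa:aa:aa:aa:aa")
print(arp_table["192.168.0.5"])  # aa:aa:aa:aa:aa:aa

# The attacker sends an unsolicited reply claiming the same IP:
handle_arp_reply("192.168.0.5", "ee:ee:ee:ee:ee:ee")
print(arp_table["192.168.0.5"])  # packets now go to the attacker
```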

As soon as the packets go through the attacker’s machine there is some risk. It’s the same risk you have when trusting your ISP or a VPN service: if the services you use are properly encrypted, they can’t really know details about what you’re doing or modify packets without your client (e.g. browser) noticing. As mentioned before there is still basic information that will always be visible such as certain metadata (e.g. the host name).

If there are web packets that are unencrypted (say HTTP) the attacker can not only look inside and read their content, but can also modify anything in there with no way of detecting the attack.

Note: the technique described above is different from what you might have read about the security issues of public WiFi networks. Public WiFi is a problem because everybody can read whatever packets are flying through the air, and if they’re unencrypted HTTP, it’s easy to see what’s happening. ARP poisoning works on any network, public or not, WiFi or Ethernet.

Let’s see this in action

Let’s look into some SDKs and how they distribute their files, and see if we can find something.


Open source Pods: CocoaPods uses git under the hood to download code from code hosting services like GitHub, over https:// or ssh://, both of which are encrypted. In general, if you use CocoaPods to install open source SDKs from GitHub, you’re pretty safe.

Closed source Pods: while preparing this blog post, I noticed that Pods can reference binary SDKs via an HTTP URL, so I submitted multiple pull requests (1 and 2), which were merged and released with CocoaPods 1.4.0, to show warnings when a Pod uses unencrypted http.

Crashlytics SDK

Crashlytics uses CocoaPods as its default distribution, but has 2 alternative installation methods, the Fabric Mac app and manual installation, both of which are HTTPS encrypted, so there isn’t much to attack here.


Let’s look at a sample SDK whose docs page is served via unencrypted http (visible in the address bar).

So you might think: “Ah, I’m just reading the docs here, I don’t care if it’s unencrypted”. The problem here is that the download link (in blue) is also transferred as part of the website, meaning an attacker can easily replace the https:// link with http://, making the actual file download unsafe.

Alternatively, an attacker could switch the https:// link to an attacker-controlled URL that looks similar.

And there is no good way for the user to verify that the specific host, URL or S3 bucket belongs to the author of the SDK.

To verify this, I set up my Raspberry Pi to intercept the traffic and do SSL Stripping (downgrading HTTPS connections to HTTP) across the board: JavaScript files, image resources and, of course, download links.

Once the download link was downgraded to HTTP, it’s easy to replace the content of the zip file as well:

Replacing HTML text on the fly is pretty easy, but how can an attacker replace the content of a zip file or binary?

  1. The attacker downloads the original SDK
  2. The attacker inserts malicious code into the SDK
  3. The attacker compresses the modified SDK
  4. The attacker looks at packets coming by, and jumps in to replace any zip file matching a certain pattern with the file the attacker prepared

(This is the same approach used by the image replacement trick: Every image that’s transferred via HTTP gets replaced by a meme)
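
Step 4 above is conceptually just pattern matching on responses; a toy Python version (hypothetical URLs, obviously not a real proxy) looks like this:

```python
# Prepared, malicious zip the attacker wants to serve instead.
PREPARED_PAYLOAD = b"PK\x03\x04 ...malicious SDK..."

def maybe_replace(url, body):
    """Swap the body of any zip download matching the target pattern."""
    if url.endswith(".zip") and body[:2] == b"PK":  # zip magic bytes
        return PREPARED_PAYLOAD
    return body

print(maybe_replace("http://sdk.example.com/sdk.zip", b"PK\x03\x04 original"))
print(maybe_replace("http://sdk.example.com/readme.txt", b"hello"))
```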

As a result, the downloaded SDK might include additional files or code that was modified:

For this attack to work, the requirements are:

  • The attacker is in the same network as you
  • The docs page is unencrypted and allows SSL Stripping on all links

Localytics resolved the issue after I disclosed it, so both the docs page and the actual download are now HTTPS encrypted.


Looking at the next SDK, we have an HTTPS-encrypted docs page; judging from the screenshot, this looks secure:

It turns out the HTTPS website links to an unencrypted HTTP file download, and web browsers don’t warn users in this case (some browsers already show a warning when JS/CSS files are loaded via HTTP). It’s almost impossible for the user to detect that something is going on, unless they manually compare the provided hashes. As part of this project, I filed security reports for both Google Chrome (794830) and Safari (rdar://36039748) to warn the user about unencrypted file downloads on HTTPS sites.
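
Comparing hashes is the one defense that works even over a stripped connection, as long as the expected hash itself came from a trusted channel. A minimal Python sketch, with placeholder bytes standing in for the real SDK zip:

```python
import hashlib

def sha256_of(data):
    """Hex digest of the downloaded bytes."""
    return hashlib.sha256(data).hexdigest()

# The vendor publishes this hash (ideally on an HTTPS page):
original = b"genuine SDK zip bytes"
expected = sha256_of(original)

tampered = b"genuine SDK zip bytes plus injected code"

print(sha256_of(original) == expected)  # True: download intact
print(sha256_of(tampered) == expected)  # False: download was modified
```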


At the time I conducted this research, the AWS iOS SDK download page was HTTPS encrypted, but linked to an unencrypted zip download, similar to the SDKs mentioned before. The issue was resolved after I disclosed it to Amazon.

Putting it all together

Thinking back to the iOS privacy vulnerabilities mentioned before (iCloud phishing, location access through pictures, accessing the camera in the background): what if we’re not talking about evil developers trying to trick their users, but about attackers who target you, the iOS developer, to reach millions of users within a short amount of time?

Attacking the developer

What if an SDK gets modified as you download it, using a person-in-the-middle attack, inserting malicious code that breaks your users’ trust? Let’s take the iCloud phishing popup as an example: how hard would it be to use other developers’ apps to steal users’ passwords and send them to a remote server?

In the video below you can see a sample iOS app that shows a mapview. After downloading and adding the AWS SDK to the project, you can see how malicious code is being executed, in this case an iCloud phishing popup is shown and the cleartext iCloud password can be accessed and sent to any remote server.

The only requirement for this particular attack to work is that the attacker is in the same network as you (e.g. stays in the same conference hotel). Alternatively this attack can also be done by your ISP or the VPN service you use. My Mac runs the default macOS configuration, meaning there is no proxy, custom DNS or VPN set up.

Setting up an attack like this is surprisingly easy using publicly available tools designed for automatic SSL Stripping, ARP poisoning and on-the-fly replacement of request contents. If you’ve done it before, it takes less than an hour to set everything up on any computer, including a Raspberry Pi, which I used for my research. The total cost of the whole attack is therefore less than $50.

I decided not to publish the names of all the tools I used, nor the code I wrote. You might want to look into well-known tools like sslstrip, mitmproxy and Wireshark.

Running arbitrary code on the developer’s machine

The previous example injected malicious code into the iOS app using a hijacked SDK. Another attack vector is the developer’s Mac. Once an attacker can run code on your machine, and maybe even has remote SSH access, the damage could be significant:

  • Activate remote SSH access for the admin account
  • Install keylogger to get admin password
  • Decrypt the keychain using the password, and send all credentials to remote server
  • Access local secrets, like AWS credentials, CocoaPods & RubyGems push tokens and more
    • If a developer now has a popular CocoaPod, you can spread more malicious code through their SDKs
  • Access literally any file and database on your Mac, including iMessage conversations, emails and source code
  • Record the user’s screen without them knowing
  • Install a new root SSL certificate, allowing the attacker to intercept most of your encrypted network requests

To prove that this works, I looked into injecting malicious code into a shell script developers run locally, in this case BuddyBuild’s:

  • Same requirements as in the previous example, attacker needs to be in the same network
  • BuddyBuild’s docs told users to curl an unencrypted URL and pipe the content to sh, meaning any code the curl command returns gets executed
  • The modified UpdateSDK is provided by the attacker (Raspberry Pi) and asks for the admin password (normally BuddyBuild’s update script doesn’t do this)
  • In under a second, the malicious script does the following:
    • Enable SSH remote access for the current account
    • Install & setup a keylogger that auto-starts when you login

Once the attacker has the admin password and SSH access, they can do anything listed above.

BuddyBuild resolved the issue after I reported it.

How realistic is such an attack?

Very! Open the Network settings on your Mac and look at the list of WiFi networks it has connected to. In my case, my MacBook had been connected to over 200 hotspots. How many of them can you fully trust? Even in a trustworthy network, there could still be other machines that were hacked previously and are running remote-controlled attacks (see the section above).

SDKs and developer tools become more and more a target for attackers. Some examples from the past years:

  • Xcode Ghost affected about 4,000 iOS apps, including WeChat:
    • Attacker gains remote access to any phone running the app
    • Show phishing popups
    • Access and modify the clipboard (dangerous when using password managers)
  • The NSA worked on finding iOS exploits
  • Pegasus: malware for non-jailbroken iPhones, used by governments
  • KeyRaider: Only affected jailbroken iPhones, but still stole user-credentials from over 200,000 end-users
  • Just the last few weeks, there have been multiple posts about how this affects web projects also (e.g. 1, 2)

and many, many more. Another approach is gaining access to the download server (e.g. an S3 bucket, via access keys) and replacing the binary. This has happened multiple times in the past few years, for example in the Transmission Mac app incident. It opens a whole new area of attack, which I didn’t cover in this blog post.

Conferences, hotels, coffee shops

Every time you connect to the WiFi at a conference, hotel or coffee shop, you become an easy target. Attackers know that conferences bring a high concentration of developers together and can easily take advantage of the situation.

How can SDK providers protect their users?

Going into detail here would be out of scope for this blog post. Mozilla offers a security guide that’s a good starting point, and provides a tool called Observatory that runs automatic checks of server settings and certificates.

While doing this research, starting on 23rd November 2017, I investigated 41 of the most popular mobile SDKs according to AppSight (counting all Facebook and Google SDKs as one, as they share the same installation method, and skipping SDKs that are open source on GitHub).

  • 41 SDKs checked
    • 23 are closed source and you can only download binary files
    • the remaining 18 are open source (all of them on GitHub)
  • 13 are an easy target of person-in-the-middle attacks without any indication to the user
    • 10 of them are closed source SDKs
    • 3 of them are open source SDKs, meaning the user can either download the SDK via unencrypted HTTP from the official website, or securely clone the source code from GitHub
  • 5 of the 41 SDKs offer no way to download the SDK securely, meaning they don’t support any HTTPs at all, nor use a service that does (e.g. GitHub)
  • 31% of the top used SDKs are easy targets for this attack
  • 5 additional SDKs required an account to download the SDK (do they have something to hide?)

I notified all affected parties in November/December 2017, giving them 2 months to resolve the issue before talking about it publicly. Out of the 13 affected SDKs:

  • 1 resolved the issue within three business days
  • 5 resolved the issue within a month
  • 7 SDKs are still vulnerable to this attack at the time of publishing this post.

The SDK providers that are still affected haven’t responded to my emails, or just replied with “We’re gonna look into this”; all of them are in the top 50 most-used SDKs.

Looking through the available CocoaPods, a total of 4,800 releases from 623 CocoaPods are affected. I generated this data locally from the Specs repo with the command grep -l -r '"http": "http://' *.
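
The grep above just looks for source entries of the form "http": "http://…" in podspec JSON files. A rough Python equivalent, with two inlined, made-up specs standing in for a Specs repo checkout:

```python
# Podspec JSON fragments, inlined for the sketch.
specs = {
    "SafeSDK.podspec.json":   {"source": {"http": "https://example.com/sdk.zip"}},
    "UnsafeSDK.podspec.json": {"source": {"http": "http://example.com/sdk.zip"}},
}

def is_vulnerable(spec):
    """A spec is affected if its binary source URL is plain http://."""
    url = spec.get("source", {}).get("http", "")
    return url.startswith("http://")

affected = [name for name, spec in specs.items() if is_vulnerable(spec)]
print(affected)  # ['UnsafeSDK.podspec.json']
```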

Open Source vs Closed Source

Looking at the numbers above, you are much more likely to be affected by attacks if you use closed-source SDKs. More importantly: when an SDK is closed source, it’s much harder to verify the integrity of the dependency. As you probably know, you should always check the Pods directory into version control to detect changes and be able to audit your dependency updates. 100% of the open source SDKs I investigated can be used directly from GitHub, meaning even the 3 affected open source SDKs are not actually affected if you make sure to use the version on GitHub instead of taking it from the provider’s website.

Based on the numbers above it is clear that in addition to not being able to dive into the source code for closed source SDKs you also have a much higher risk of being attacked. Not only person-in-the-middle attacks, but also:

  • The attacker gains access to the SDK download server
  • The company providing the SDK gets compromised
  • The local government forces the company to include back-doors
  • The company providing the SDK is evil and includes code & tracking you don’t want

You are responsible for the binaries you ship! You have to make sure you don’t break your users’ trust, violate European Union data protection law (GDPR), or let a malicious SDK steal your users’ credentials.

Wrapping up

As a developer, it’s our responsibility to make sure we only ship code we trust. One of the easiest attack vectors right now is via malicious SDKs. If an SDK is open source, hosted on GitHub, and is installed via CocoaPods, you’re pretty safe. Be extra careful with bundling closed-source binaries or SDKs you don’t fully trust.

Since this type of attack leaves little trace, you won’t easily be able to tell whether your codebase is affected. By using open source code, we as developers can better protect ourselves, and with that, our customers.

Check out my other privacy and security related publications.

Thank you

Special thanks to Manu Wallner for doing the voice recordings for the video.

Special thanks to my friends for providing feedback on this post: Jasdev Singh, Dave Schukin, Manu Wallner, Dominik Weber, Gilad, Nicolas Haunold and Neel Rao.


Tags: security, privacy, sdks

Mac Privacy: Sandboxed Mac apps can record your screen at any time without you knowing


Any Mac app, sandboxed or not, can:

  • Take screenshots of your Mac silently without you knowing
  • Access every pixel, even if the Mac app is in the background
  • Use basic OCR software to read the text on the screen
  • Access all connected monitors

What’s the worst that could happen?

  • Read passwords and keys from password managers
  • Detect what web services you use (e.g. email provider)
  • Read all emails and messages you open on your Mac
  • When a developer is targeted, this allows the attacker to potentially access sensitive source code, API keys or similar data
  • Learn personal information about the user, like their bank details, salary, address, etc.


This project is a proof of concept and should not be used in production. The goal is to highlight a privacy loophole that can be abused by Mac apps.

How can I protect myself as a user?

To my knowledge, there is currently no way to protect yourself.


How can this be solved?

There are lots of valid use cases for Mac apps to record the screen, e.g. 1Password’s 2FA support, screen recording software, or simple screen sharing via your web browser or Skype. However, there must be some kind of control:

  • The App Store review process could verify the Sandbox entitlements for accessing the screen
  • Put the user in charge with a permission dialog
  • Additionally the user should be notified whenever an app accesses the screen.

Of course, I also filed a radar (rdar://37423927) to notify Apple about this issue.

How does it work?

A developer just needs to call CGWindowListCreateImage to capture the complete screen in an instant:

CGImageRef screenshot = CGWindowListCreateImage(CGRectInfinite,
                                                kCGWindowListOptionOnScreen,
                                                kCGNullWindowID,
                                                kCGWindowImageDefault);

NSBitmapImageRep *bitmapRep = [[NSBitmapImageRep alloc] initWithCGImage:screenshot];

In my experiments, I piped the generated image into an OCR library and was able to recover all text rendered on the user’s machine.


Tags: mac, screenshot, privacy