Scoring a job in the US as a software engineer.

This post was originally published in October of 2013 so some information might be outdated.

I usually write very technical things about iOS on this blog, but today I’ll make an exception. I get asked about this a lot, so I decided to write an article I could point people to instead of repeating myself all the time.

A Brazilian colleague just asked me the following on LinkedIn:

“How are the opportunities for developers in the US? Is the procedure complex to get a job and a work visa?”

It’s easier than you think. Or at least for me it was easier than I thought. But it’s a long process, and the selection process differs from the Brazilian one and also varies between companies, so you’ll have to adapt and study a bit.

But first a disclaimer: I am NOT an expert on this matter. All I’m going to talk about comes from my personal experience when I was hired by Pocket Gems in June 2012 and might not be the case for every other candidate and any other company out there so take all this with a grain of salt.

I’ll start with the visa process because it’s the first thing you need to understand to start the process.

The first thing to consider is the period in which visas are granted by the government. The fiscal year here starts in October, so visas always start in October. Meaning, if you have your visa approved in June, you will only be able to move in October.

The U.S. government has a cap on H1B visas (the most common type for foreign employees) which is currently set at 65,000 per fiscal year. And the government starts accepting visa applications in the beginning of April. So, if you receive an offer in December, the company will have to wait until the beginning of April to submit your paperwork and then wait until October to actually move.

Another important detail is that currently this limit is reached very early. I received my job offer in June 2012 and that year there were still visas available (the cap was reached a week after my request was submitted). But this year the competition for visas is so fierce that the government stopped accepting requests two or three weeks after the start of the process, still in April!

Taking all this into consideration, the best time to start looking for a job here is from July to March, but I would say the ideal is to begin in August or September. That gives you time to do the interviews, receive offers and prepare all the paperwork before the beginning of April, so the company can submit your request on the very first day.

As for the paperwork, it’s really not that complicated. In my case the immigration lawyers the company hired to handle my case did most of the work. You’ll have to send them a few personal documents as well as your college certificates and sometimes letters of recommendation from previous employers. My college in Brazil already provided translated certificates, so I sent those. If your college can’t provide copies in English (some colleges can do that), I believe the lawyers can take care of the translation too.

After you submit all the paperwork the lawyers will start preparing your package to submit to the immigration department and then it’s a waiting game. In my case the paperwork was submitted in June and was only approved in mid October. The department of immigration might also ask for additional documentation from you or your company but I think this is normal.

Also keep in mind that there’s something called premium processing. If the company is willing to pay the immigration department a larger fee, your application will be processed faster than with the standard fee. Ask the company that will hire you whether they’ll pay for this or not. It might not make much sense if your application is submitted at the beginning of April, but (assuming the cap has not been reached) it can be worth it if you only submit in June.

Enough about the visa. Before any of that, though, you need to get an offer from a company willing to sponsor your visa. So let’s talk about that, always remembering that this reflects my experience and might vary. Let me tell you my story.

At some point, I think it was 2010, I decided it was time for me to get a job in San Francisco. I had been to the city many times before and always had a good time there. Moving there had been a dream of mine since I first visited at 21.

So at this point I decided to start expanding my online presence. I created this blog and started writing a few articles. I was always a big fan of Ray Wenderlich’s website so when he put out a request for writers I jumped on the opportunity and got accepted based on my previous writing.

I also started going to WWDC, Apple’s developer conference, held in my beloved San Francisco every June. In 2011, at an event after WWDC, I met one of the founders of a startup that was looking for developers, and he liked me. We scheduled a remote interview for when I was back in Brazil.

The interview was my first in many years and my first real technical interview. In Brazil we’re usually hired based on relationships alone, and that’s how I got all my earlier jobs. It consisted of a series of typical CS problems where I had to either write code to solve a problem or at least describe a solution at a high level. I also talked with the other founders, both technical and non-technical.

I passed the interview and they made me an offer to start working remotely with the possibility of sponsoring a visa if they liked my work.

They did and we started the visa application process. At one point in the process the immigration department asked for some information about the company (financials, office information, etc.) and more documentation about myself (they wanted the translated version of my college papers). At this point the company started to have funding problems and I decided it was best to halt the process as I didn’t want to have a rejected visa in my name and I knew the company would not reach the financial bar the immigration department required.

So, back to almost square one, I started spreading the word to some people I knew in the US that I was looking for a job that could sponsor a visa. One of these people was a guy (now a friend of mine here) who had contacted me because of my tutorials on Ray’s website and whom I was mentoring via Skype from time to time. He told me he knew some people at a gaming company that was hiring and willing to sponsor H1Bs. He referred me and I started the interview process.

I had a few talks over Skype with a recruiter at the company, who explained to me how the process works. One hint I’ll give here: ask as many questions as you can at this point. Sometimes the recruiter might not be able to answer a few things, but it’s OK to ask. So ask how many interviews you’ll need to do, what the nature of the interviews is, what the prerequisites for each of them are, etc. In my case, for example, they did tell me I would need to show good knowledge of CS fundamentals: data structures, algorithms, runtime complexity and so on. As my college days were far behind me and my previous jobs never really required these things, I had forgotten most of these topics and had to go back to my college books to refresh them. Had I not asked and studied, I can say I would not have passed the interview process.

So I did a series of interviews over Skype with different people. As I explained above, every interview is highly technical and you have to show good knowledge of CS fundamentals as well as good analytical thinking and problem-solving skills. All interviews were conducted over Skype together with a website that allows two people to code at the same time online, so my interviewer could see my code live and I could also get the questions in written form.

This is very specific to Pocket Gems and might vary. In our case, for example, no knowledge of iOS is necessary, as we believe people with good CS knowledge can learn new languages and get used to different platforms easily, but I know some companies do ask platform- or language-specific questions.

Again, during these interviews it’s a good idea to ask a lot of questions and clarifications about the problems you’re given. The interviewer might not be able to answer some of them, but most questions not only get answered and help you solve the problem; they’re also seen by your interviewer as a good sign that you pay attention to details and ask the right questions when given a problem.

After 3 Skype interviews they decided to make me an offer. After a few days I accepted and started to rush my documentation because we knew the cap was about to be reached. As I said above we submitted the paperwork a few days later (this is not typical, it usually takes a few weeks to prepare all the paperwork but the company really wanted to avoid the cap so the lawyers were rushed a bit) and the rest you already know.

A few things you should keep in mind when you get an offer:

  1. The salary is a yearly salary. This is different from what Brazilians and maybe others are used to, as we’re usually quoted monthly salaries. And this value is before taxes, so you’ll have to account for that to figure out how much money you’ll actually receive at the end of the month. Again, I’m not a tax expert, but a good estimate is that you’ll pay around 30% in taxes! This varies, though. Ask the company.
  2. Ask about benefits. The most important one is health insurance. I think most companies do pay for health insurance, but you should ask what kind of insurance it is and research it online. Also, differently from Brazil, you’ll need three kinds of insurance: dental, vision and medical. Personally, I would not take a job that didn’t pay for my insurance, as it is extremely expensive here, especially on an individual plan. And if you’re married or have kids, check whether the company covers them too.
  3. Check where you’re going to live. Ideally it’s a place you already know and like. But most important is to check how much it actually costs to live there. For example, it might be better to take a job that pays 100K in Austin than one that pays 110K in San Francisco or New York. Those two cities are crazy expensive right now, so your salary had better be good.
  4. Try to get remote work as a contractor while you wait for your visa. Depending on the company and on how impressed they are with you they might do it and I think it’s totally worth it as you’ll already start making friends even before you move. This can be extremely valuable when you actually move.
  5. Check out the company as much as you can. As I said above, the first company that hired me really didn’t have that much money in the bank, the founder could not get more funding, and so my visa was almost denied, not because of me but because of them. You also don’t want to join a company that might lose funding in a few months and let you go, as your visa is tied to your job. If you do lose your job, you have some time to find another employer that can transfer your visa. How much time? I have no idea; what I read online ranged from zero to 90 days. I’d say that as long as you’re on an H1B without a job you’re at risk of being deported, even if the risk is small. Again, no expert here, so don’t trust me on this…
  6. Be sure the product is something you’ll enjoy working on. Moving to a different country is not easy, so you’d better at least like what you’ll be doing for over 40 hours a week. That does not necessarily mean you have to like the product itself. I’m not much of a gamer, but I really enjoy coding games. They present programming challenges no other product does, and they’re also very visual, something I enjoy a lot.
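To make the salary math from item 1 concrete, here is the back-of-the-envelope conversion from a yearly gross offer to a rough monthly take-home figure. The flat 30% rate is just the rough estimate mentioned above, not tax advice:

```python
def monthly_net(annual_gross, tax_rate=0.30):
    """Rough monthly take-home pay from a yearly salary,
    assuming a single flat effective tax rate (a big simplification)."""
    return annual_gross * (1 - tax_rate) / 12

# A $100,000/year offer at a ~30% effective rate is roughly $5,833/month.
print(round(monthly_net(100_000)))
```

Real withholding is bracketed and varies by state, so treat this only as a sanity check when comparing offers.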

As this article is going in reverse order, now it’s time to talk about how and where to apply.

I said that in Brazil most hiring is done through personal connections, so you might think I meant that connections play no part here, but that’s not true. It’s always better if you know someone at the company, or someone who knows someone. My point was that connections are not the only part of the process. Even if you know someone and get a good recommendation, you’ll still have to go through the whole interview process. But it’s always good to be referred.

So if you’re interested in applying to a certain company, try to use your LinkedIn contacts to get introduced. If you’re at a conference, try to reach out to someone who works there, introduce yourself and let the person know you’re interested in working for them. Doing this shows real interest and that you’re a proactive person.

But if you can’t do any of this just apply online anyway. Get your LinkedIn in order, make a nice one or two page resume and a cover letter and just apply.

Regardless of the method you use to apply, make it clear from the beginning that you’ll need the company to sponsor your visa. A lot of companies do this, but some might not be willing to, and being upfront will save everybody a lot of time. Don’t think you’ll go through the whole process and impress them enough that they’ll change their minds about visas because you’re so awesome. Sorry, it won’t happen.

One small thing that can help here: get a phone number in the US and forward all calls to your cell in your country. The easiest way to do this is buying a Skype number and setting up call forwarding, but there are cheaper websites out there. I use Sonetel, for example, and it’s really easy and cheap to buy a number in the US and set it up so it forwards all calls to your cell phone. This is not to fool people into thinking you’re in the US, and you should make it clear when you give out the number that it just forwards to your local cell, but it makes it easier for a recruiter to call you and you won’t need to keep Skype running all the time.

Well, there you have it, I think this answers the question. I’d also like to write about what it’s like to actually move from a place where you spent most of your life to a foreign country, as this is very important, but I’ll leave that for another post. All I’m going to say is: if you’re moving to San Francisco, don’t worry, this place is amazing and we’ve got you covered! Seriously, people are amazing here and you won’t have many problems when you move.

One last thing: Pocket Gems is still hiring software engineers, and for other positions too. Check us out and shoot me an email if you’d like to apply; you’ll get to work with awesome people like my colleagues and me!

Python’s documentation at your fingertips

The pain

As I mentioned in my last blog post, I started learning Python some time ago and fell in love with it. But, as with any new programming language, I spent a lot of time browsing the documentation to find, for example, the correct name of the method that finds a substring within a string. Is it indexOf, find, rangeOfString, locate??? Off I went to the (very well done, btw) online Python docs to look for the right method in the string module.
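For the record, the answer in Python is str.find, with str.index and the in operator as close relatives:

```python
text = "Scoring a job in the US"

# str.find returns the index of the first match, or -1 if absent.
print(text.find("job"))   # 10
print(text.find("visa"))  # -1

# str.index is the same but raises ValueError when the substring is missing.
print(text.index("job"))  # 10

# The `in` operator is the idiomatic membership test.
print("US" in text)       # True
```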

In the meantime I also fell in love with another tool: Dash. If you’re an iOS developer and don’t have Dash, you should go get it right now! It’s one of the most useful tools in my tool belt at the moment. And for the very low price of free, you just can’t go wrong. As I said to the author, I’d gladly pay good money for it.

Dash’s first use for me was browsing the iOS documentation. I never liked Xcode Organizer’s documentation browser. The search is incredibly slow, pages take forever to load, there’s no easy way to jump to a method’s documentation, you name it…

Dash is the complete opposite:

  • The search is amazingly fast;
  • Once you find the class you’re looking for it builds a list of all the methods so you can quickly jump there;
  • If you click on a method’s declaration it automatically copies it to your clipboard. It’s now a breeze to create delegate methods;
  • You can search inside a class documentation just as easily;

Not to mention other very nice features, such as a code snippet collector and a text auto-expansion tool. Even if you’re not an iOS or OS X developer, Dash can be a great tool just for collecting snippets and auto-expanding text. Enough praise; let’s get back to the problem.

Dash can be used to browse any documentation that has been bundled in Apple’s docset format. When I learned this, one of those light bulbs appeared over my head and I immediately started to scour the web for a version of Python’s documentation in docset format, only to find that such a thing either does not exist or is very well hidden.

Using the snake to help the snake

So I decided to take matters into my own hands and build this documentation myself. Using Python, of course.

With the help of Dash’s author I learned how to build docsets that were easily searchable inside Dash. After a few hours of coding, reading Apple’s documentation and building regexes to collect all the information I thought should be in the documentation, I managed to create a docset, configure Dash to use it and, voilà, instant Python documentation search!
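My script targeted the docset format Dash understood at the time; as a rough illustration of the general approach, here is a minimal sketch using the SQLite-indexed layout that Dash documents for third-party docsets (the directory structure and the searchIndex table come from that spec; the helper functions and names are just illustrative):

```python
import os
import plistlib
import sqlite3

def create_docset(name, out_dir="."):
    """Create an empty docset skeleton with a searchable SQLite index."""
    root = os.path.join(out_dir, name + ".docset")
    resources = os.path.join(root, "Contents", "Resources")
    documents = os.path.join(resources, "Documents")  # the HTML files go here
    os.makedirs(documents, exist_ok=True)

    # Info.plist identifies the docset to the documentation browser.
    info = {
        "CFBundleIdentifier": name.lower(),
        "CFBundleName": name,
        "DocSetPlatformFamily": name.lower(),
        "isDashDocset": True,
    }
    with open(os.path.join(root, "Contents", "Info.plist"), "wb") as f:
        plistlib.dump(info, f)

    # The search index: one row per documented symbol.
    db = sqlite3.connect(os.path.join(resources, "docSet.dsidx"))
    db.execute("CREATE TABLE searchIndex("
               "id INTEGER PRIMARY KEY, name TEXT, type TEXT, path TEXT)")
    db.commit()
    return root, db

def add_entry(db, name, entry_type, path):
    """Register one symbol (e.g. a method) and the page/anchor it lives at."""
    db.execute("INSERT INTO searchIndex(name, type, path) VALUES (?, ?, ?)",
               (name, entry_type, path))
    db.commit()
```

The real work, of course, is walking the documentation HTML and calling something like add_entry for every method, function and class you want searchable.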

I managed to generate documentation for Python 2.7.2 and for 3.2.2, the latest versions at this time. Click the links to download and feel free to use them. You’ll have to unzip the file and put the resulting .docset bundle somewhere. I would recommend putting them in ~/Library/Developer/Shared/Documentation/DocSets, as this is the place Xcode looks for docsets. I believe Dash looks in this folder too, or at least it’s the default folder Dash offers when you add new docsets.

And I’m proud to say that Dash’s author will ship this docset (the 2.7.2 version) with Dash’s next version. If you want documentation for version 3.2.2, you can download my version and use that instead. Oh, and before I forget, Dash now comes with a lot of docsets created by the author. Currently Android, Java, Perl, Python, PHP, Ruby, jQuery and Cocos2D docsets are included.

Plus, I’m adding this script to my PythonScripts github repository. Feel free to grab it, fork it, use it and improve it. I love getting pull requests with improvements on my repos.

To use the script you’ll need the BeautifulSoup module installed (sudo pip install beautifulsoup4). I use it to parse the documentation’s HTML so I can find all the interesting methods, functions and classes to index. I also had to add anchor tags to all HTML files so Dash could jump to the correct place inside each page.
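That anchor-insertion step is essentially a search-and-replace over the HTML. The real script uses BeautifulSoup and handles more cases, but the core idea can be sketched with a plain regex (the `<dt id="...">` pattern here matches how the Python docs mark each definition; the function name is illustrative):

```python
import re

def add_dash_anchors(html):
    # The Python docs mark each definition with <dt id="...">; inserting a
    # named <a> anchor right before it gives Dash a target to jump to.
    return re.sub(
        r'<dt id="([^"]+)">',
        r'<a name="\1"></a><dt id="\1">',
        html,
    )
```

With anchors in place, the index entry for `str.find` can point at `library/stdtypes.html#str.find` and Dash lands exactly on that method.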

This is what you have to do to generate a new version of the documentation from the HTML version:

  1. Download the documentation for the version you want here. You should download the zip file for the HTML version of the docs.
  2. Expand the documentation somewhere.
  3. Open terminal and cd to the folder where you expanded the docs.
  4. Run the script from this folder.
  5. The script will create a python.docset bundle with all the necessary files.
  6. Move the python.docset bundle to some folder. Again, I recommend ~/Library/Developer/Shared/Documentation/DocSets
  7. Use it!


This is my first contribution to the Python community. I hope you like it and that using Dash with this docset makes your life easier. It has certainly made mine. If you have any feedback about this docset, leave a comment below.

The docset does not contain the complete documentation (it lacks the tutorials and HOWTOs, for example), as I personally use it only as a reference. But, as I said before, feel free to change the script to include more content and make a pull request so I can add it to my repo.

Python script to compose iPhone marketing images

An excuse to use Python

I consider myself a polyglot programmer and try to learn a new language every now and again. My latest is Python and I can say that I love it! It’s easy to learn, has a HUGE library, very good support for lists and dictionaries and is flexible enough to be an ideal choice from small scripts to large frameworks.

I’ve been doing a lot of Python scripts for all sorts of small jobs and my latest script might be of interest to some of the readers of this blog.

I was creating a press kit for my latest app and I wanted to get some screenshots of the app and compose them inside an iPhone image.

So, I wanted to get the image on the left and create the one on the right:


Of course I could use Photoshop or Pixelmator and do this all manually, but I had 8 of these and a craving for Python. So I fired up CodeRunner and started writing a Python script. It turns out that, with the help of the incredible Python Imaging Library, it was a breeze.
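The core trick is simple: paste the screenshot onto a blank canvas first, then paste the device frame on top using the frame’s own alpha channel as the mask, so its transparent screen cut-out lets the screenshot show through. A minimal sketch with PIL (function and variable names are illustrative, not my actual script’s):

```python
from PIL import Image

def compose_screenshot(frame, screenshot, offset=(0, 0)):
    """Composite `screenshot` behind a device `frame` image whose
    screen area is transparent, keeping the frame artwork on top."""
    canvas = Image.new("RGBA", frame.size, (0, 0, 0, 0))
    canvas.paste(screenshot, offset)
    # Using the frame as its own mask: opaque frame pixels cover the
    # canvas, while the transparent screen area leaves the shot visible.
    canvas.paste(frame, (0, 0), frame)
    return canvas
```

In the real script the offset is the hard-coded position of the transparent screen rectangle inside EmptyiPhone.png.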

So I decided to open source it so everyone can use it. Feel free to grab it, fork it, improve it and make a pull request.


This script uses PIL. The easiest way to install PIL is using pip:

sudo pip install PIL

Using the script

To use the script place your screenshot files in the same folder as the script and the EmptyiPhone.png file and run it. The script will create new files with ss_ prefixes for all .png files found in the folder.

Changing the iPhone image

To use a different image, or to adapt the script for an iPad screen for example, change the EmptyiPhone.png image (or the image name in the script) and adjust the coordinates used to paste the original screenshots. I plan to automate this step by analyzing the image and finding the transparent rectangle in the middle, but for now it’s a manual step.

Hope you like it.

Replicating TweetBot’s Alerts and Action Sheets

How it all started: A love and hate story.

Since the first time I had to use a UIActionSheet or UIAlertView in an app, I’ve disliked the way they were implemented. It was a pain if you had two kinds of alerts in the same class, for example, as everything is done through delegate methods. I also disliked the fact that the code that should run when a button is tapped almost always lives somewhere else in your source code. You need a lot of constants and switches, and you have to tag your UIAlertViews… I hated it!

But on the other hand, they are very useful for asking for information in a modal way, so I kept using them when appropriate.

And then I found PSFoundation, a library of very nice iOS utilities by Peter Steinberger. It has a LOT of useful utility classes, but two stood out for me as a relief for my hatred: PSActionSheet and PSAlertView.

To see an explanation on how they work, take a look at the blog post that originated PSActionSheet and inspired PSAlertView: “Using Blocks” by Landon Fuller who apparently hates UIActionSheets as much as I do.

Since I found these classes I’ve incorporated them into every one of my projects. And when I took over as lead developer for Arrived’s iPhone app I took a few hours right at the beginning of the project to convert every UIActionSheet and every UIAlertView into a BlockActionSheet and BlockAlertView (I renamed the classes to make the name more memorable and descriptive).

A new kind of hate

Arrived has a very distinctive look. I love the design of the app, with lots of textures, the red carpet over the pictures on the stream, the custom buttons, the title, even the tab bar is customized to look unique. So, in the middle of all this very nice color scheme whenever I had to use an Alert View or an Action Sheet I was punched in the face by a freaking blue Alert! How I hated those Alert Views ruining the look of the app.

And then I got TweetBot. What a nice app, what a unique interface and…. what the hell? They customized their Alert Views! Super cool. Right then I thought: I gotta have this….

Hate is a very effective motivator

We then decided to terminate every instance of default Alert View and Action Sheet. Since I already had every call to those wrapped with my own Block* classes, it was just a matter of changing these classes and everything should work as before, but with a much better look.

And so we did it and we decided to open source it. And let me tell you they look great!


But before I send you over to our repository to download this baby, let me tell you how they work and what are the current limitations.

Using the library

If you’re familiar with the above-mentioned PSActionSheet and PSAlertView, you will have no problem adjusting to these classes, as I didn’t change their methods at all. I added some methods to make the classes even better, but everything that used the old classes works with no modifications.

You’ll need to import 6 files into your project: 4 for the two main classes (BlockActionSheet.(h|m) and BlockAlertView.(h|m)) and 2 for another view that serves as the background for the alerts and action sheets, obscuring the window to make it look modal and help the user focus on the dialog (BlockBackground.(h|m)). You’ll never have to use this third class directly, though, as everything is handled by the two main classes. You’ll also need the image assets that we use to draw the views, such as the buttons and background.

To create an alert view you use:

BlockAlertView *alert = [BlockAlertView alertWithTitle:@"Alert Title" message:@"This is a very long message, designed just to show you how smart this class is"];

Then for every button you want you call:

[alert addButtonWithTitle:@"Do something cool" block:^{
    // Do something cool when this button is pressed
}];

You can also add a “Cancel” button and a “Destructive” button (this is one of the improvements that UIAlertView can’t even do):

[alert setCancelButtonWithTitle:@"Please, don’t do this" block:^{
    // Do something or nothing.... This block can even be nil!
}];

[alert setDestructiveButtonWithTitle:@"Kill, Kill" block:^{
    // Do something nasty when this button is pressed
}];

When all your buttons are in place, just show:

[alert show];

That’s it! Showing an Action Sheet works almost exactly the same. I won’t bore you here with more code but the repository has a demo project with everything you’ll need.

You can even have more than one cancel or destructive button, despite the fact that the methods are prefixed with set instead of add. I kept the names from the original libraries, where you could only have one cancel button, for compatibility with my legacy code. Feel free to rename them if you don’t have that constraint.

Another cool thing we did was add an animation when showing and hiding the new views as Tweetbot does. This is another area where you can go nuts and add all kinds of animation.

The look of the alerts and action sheets is made of a few assets for the background and the buttons so if you want to change the color scheme all you need is a little time to change ours. Check out the included assets and just change them if they don’t work for you.

The only limitation these classes have so far is with device rotation. As Arrived only works in portrait this is not a problem I needed to solve. And it’s not that trivial because you’d have to reposition the buttons and text because the window now has a different size and the alert might be too tall to hold a long message in landscape. And you might need to add a scroll for some action sheets too. But feel free to fork and fix this!

Gimme that!

You can get everything you need from our GitHub repository. There’s a demo project with lots of buttons to trigger alerts and action sheets until you get sick of them.

Also included in the project are the graphical assets for the buttons and backgrounds, though you might need to roll your own: you can use ours, but they might not fit the look of your app.

Now go get the project and have fun with it. Feel free to fork and add pull requests so we can incorporate your changes for everyone.

The PhotoAppLink library story


Today is the day of the official launch of the PhotoAppLink library. The library is a joint effort between me and Hendrik Kueck from PocketPixels, maker of the ever top-selling ColorSplash. We have a website if you want the latest news about it. This post tells the story behind it.

The problem

Since the first version of my first iPhone camera app, Snap, I wanted my users to be able to share their annotated images with as many services as possible. I did the obvious: Twitter, Facebook, Tumblr and I still want to add more to this list.

But one thing was still not possible: how could I share the images with another app? How can I send an image from Snap to Instagram so that users can apply some filters and share? How could I send it to AppX so that users could add filters, frames and a lot more that AppX might offer?

And the other way around, too. What if a user takes a picture with AppX and wants to add some text on top of it? AppX might not offer this, but Snap does. Wouldn’t it be nice if AppX could open Snap with an image, Snap could add notes to it and then send the result back to AppX? There was just no way of doing this. Or at least there wasn’t until now.

The proposal

What I wanted (you’ll understand the past tense in a moment) to propose was really quite simple, but quite ingenious (or so I thought).

The iOS API allows apps to implement custom URL schemes. I wanted every camera or image-processing app to implement one so we could all exchange images with each other.

So I hacked together a way to Base64-encode an image and send it to another app using these custom URL schemes. It worked well in some tests, so I wrote a library and started sharing it with some top devs in the photography section of the App Store.

Some people didn’t even respond, but Hendrik Kueck from PocketPixels, maker of the ever top-selling and very fun ColorSplash, replied telling me he’d had a similar idea over a year earlier (and I thought my idea was so original…) but hadn’t gotten many people on board, so he had kind of forgotten about it.

He sent me his code, and I think my email made him regain his enthusiasm, so we decided to iron out a few missing pieces in the library that would make adoption much easier and try to get more people on board.

When I checked his library, I saw that his idea, even though it also used custom URL schemes, was to use a custom pasteboard to pass data from one app to the other. WAY better than Base64-encoding everything. What a revelation that was.

So I threw away most of my code and ported my app to use his code in about an hour. It’s called PhotoAppLink (mine would be called iOSImageShare, even his name is better… damn….) and he even registered a domain for it.

How does it work?

There’s a README file with code and a step-by-step tutorial on how to add this to your app, but first let me explain how it works. It’s really very simple.

When you want to send an image to another app, we create a custom pasteboard with a common name and paste the image’s NSData (JPEG-encoded at very high quality) to this pasteboard. We then open a custom URL registered by the other app.

The system then opens this other app, which knows it’s being called to handle a custom URL. The app checks the shared pasteboard, gets the image from there and then… well, that’s up to the app. In the case of Snap, I open the annotation screen so that the user can add notes. ColorSplash opens and prepares the image for processing, just as if you had picked it from your Camera Roll.

So, all very simple, right? Well, if you’re paying attention there’s one thing that’s missing here: how do I know what URL to open?

Who wants to play?


So, you’ve decided to implement PhotoAppLink in your “soon to be the best” camera app, but you feel lonely. You don’t really know which other apps you can send your images to. Well, not to worry, my friend: we’ve got a solution for you.

We will host a plist on our website called photoapplink.plist. This file will contain information about all compatible apps. If you implement PhotoAppLink in your app you just have to send us an email about it with all your info and we’ll add your app to this file.

Our library then simply downloads this file and uses UIApplication’s canOpenURL: to check whether each app is installed. The library will also download all the compatible apps’ icons (and cache them) automatically in the background.
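A rough sketch of that check, assuming each plist entry carries the app’s custom URL scheme (the key name below is made up for illustration):

```objc
// compatibleApps: array of dictionaries parsed from photoapplink.plist.
// Keep only the apps actually installed on this device.
NSMutableArray *installedApps = [NSMutableArray array];
for (NSDictionary *appInfo in compatibleApps) {
    NSURL *launchURL = [NSURL URLWithString:[appInfo objectForKey:@"scheme"]];
    if ([[UIApplication sharedApplication] canOpenURL:launchURL]) {
        [installedApps addObject:appInfo];
    }
}
```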

When your user wants to send a picture to another app you can use a UIViewController from the library that handles everything, from showing compatible apps to sending your image.

But if you don’t like the interface we built or if it doesn’t fit your app, no problem. The library can provide all the information about compatible apps so that you can build your own interface. Or just change the interface we provide to fit your app.

That’s it?


Well, not quite. If you’re still not convinced that implementing this in your app is a good idea, I think this will make you change your mind.

When we present a list of compatible apps to the user, we can check which apps the user has installed, but we also know about a bunch of apps the user doesn’t have yet. So our UIViewController has a “More apps” button that presents all the compatible apps the user still doesn’t have in a nice table, each with a button to get the app.

This button opens the App Store app so the user can get the app immediately! AND it uses a link with your LinkShare site ID, so you even get a commission on the sale.

So, your app can earn more revenue selling other apps AND your app can now be discovered by users of other PhotoAppLink-compatible apps. How cool is that!

And, again, if you don’t like our interface, just change it or roll your own using the information gathered by the library.

Let’s play?

Convinced? Great. There’s a very quick tutorial on how to implement PhotoAppLink in your app. It should take you about an hour if you use the controls we provide, and there’s a test app you can use to test the interaction with your app. The whole process, with testing, should not take more than 4 hours!

Check it out and let’s start playing together!

Two small iOS tricks


Well, I got back from WWDC and there was just too much to do, so I’ve been neglecting my blog a little bit. But since I already missed one post on AltDevBlogADay and today I was about to miss another (3 strikes and I’m out???), I decided to write something quick but maybe useful for all you iOS devs out there.

I’ll share two tricks I recently had to use for Snap. One I learned during one of the labs at WWDC; it’s an old but very well-hidden trick that’s not covered by NDA, so I can share it. The other is something I hacked together on my own but that got somewhat validated by an Apple engineer I showed it to, so now I feel more confident showing it in public…

First trick

Snap is a camera app and my users were asking me to implement zooming. I studied the API a bit and there was no way to tell the camera to zoom. What I came up with (and Apple’s engineers who work with this API have said it’s the right thing to do) was to change the frame of the camera preview layer so that it “bleeds” out of the view, giving the illusion of zoom. After taking the picture I have to crop, but that’s another story.

My problem was that when I changed the frame of the layer, even though I was not applying any animation, the system would animate the change and the interface felt a little weird, like the zoom was “bouncing”. It’s hard to explain, but the result was not good and I could not figure out how to remove this animation.

During one of the labs I asked an Apple engineer about this, and as he was about to go looking for the answer, another attendee from across the table who overheard me said he knew how to do it and very quickly guided us to the documentation where this little trick is hidden.

So, inside the “Transactions” section of the “Introduction to Core Animation Programming Guide” there’s a header that says “Temporarily Disabling Layer Actions”:

[CATransaction begin];
[CATransaction setValue:(id)kCFBooleanTrue
                 forKey:kCATransactionDisableActions];
// Do what you want with your layer
[CATransaction commit];

So there you have it. Very obscure but it works. You can also change the duration if this is what you want:

[CATransaction begin];
[CATransaction setValue:[NSNumber numberWithFloat:10.0f]
                 forKey:kCATransactionAnimationDuration];
// Do whatever you want with your layer
[CATransaction commit];
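CATransaction also has typed convenience setters that do the same thing as the key/value calls, with a bit less typing:

```objc
[CATransaction begin];
[CATransaction setDisableActions:YES]; // equivalent to setting kCATransactionDisableActions
// Do what you want with your layer
[CATransaction commit];
```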

Second trick

Another problem I faced with Snap was that, even though saving the image happens mostly in the background using blocks and GCD (more about this in another post…), in order to compose the image I still had to make the user wait. I could do the composing in the background too, but that would involve copying a lot of memory and I didn’t want to do that on the iPhone. It’s fast enough not to be a problem, but I didn’t like that the interface froze while I was composing the image with the notes and the user was just staring at an unresponsive device.

So, I decided to use MBProgressHUD to at least show something to the user. My problem was that I had a lot of calls to the method that generates the image, and the callers expect to get the UIImage back. Since the calls are made on the main run loop and the method takes too long, the interface would freeze and the HUD would not show.

Yes, I could have refactored everything to use GCD and callback blocks, but I had to release an update and didn’t have much time. So, I decided to pump the main run loop myself:

    // Will return my image here
    __block UIImage *img = nil;
    // To indicate the work has finished
    __block BOOL finished = NO;
    // This will execute on another thread. High priority so it's fast!
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
        // Call my very long method and indicate we're finished
        img = [[self annotatedImage] retain];
        finished = YES;
    });
    // This will probably execute even before my image method above
    MBProgressHUD *hud = [MBProgressHUD showHUDAddedTo:view animated:YES];
    hud.labelText = label;
    // Get the main run loop
    NSRunLoop *runLoop = [NSRunLoop currentRunLoop];
    // Pump the run loop until the background work is finished
    while (!finished) {
        [runLoop runUntilDate:[NSDate dateWithTimeIntervalSinceNow:0.01f]];
    }
    // Hide the HUD
    [MBProgressHUD hideHUDForView:view animated:YES];
    // Return my image that is now composed
    return [img autorelease];

Even though it’s kind of an ugly hack, it can be used in situations where you really have to make the user wait and a synchronous call on the main thread is what you already have, or what is faster for you to implement.

I don’t recommend this for every situation. There are cases where this might lead to a deadlock in your app, so test it a lot if you decide to use it. It worked for me. And, as I said, I showed this to an Apple engineer during one of the labs and he said it was a good solution to the problem.
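For reference, here’s roughly what the “proper” refactor mentioned above would look like: the same work moved behind a completion block, with no run-loop pumping. (A sketch only; annotatedImage, view and label come from this post’s example.)

```objc
- (void)annotatedImageWithCompletion:(void (^)(UIImage *image))completion
{
    // Show the HUD on the main thread before kicking off the work
    MBProgressHUD *hud = [MBProgressHUD showHUDAddedTo:view animated:YES];
    hud.labelText = label;
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
        // Compose the image off the main thread
        UIImage *img = [[self annotatedImage] retain];
        // Hop back to the main thread to update the UI and deliver the result
        dispatch_async(dispatch_get_main_queue(), ^{
            [MBProgressHUD hideHUDForView:view animated:YES];
            completion(img);
            [img release];
        });
    });
}
```

The callers do have to change to take a block instead of a return value, which is exactly the refactoring I didn’t have time for.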

I have a fork of MBProgressHUD and have used these principles to build a category for MBProgressHUD that does this AND can even be cancelled by the user. That version is even hackier, so I won’t go into it right now for lack of time, but if someone wants to read about it, just ask in the comments and I’ll write it up.

An afterthought about WWDC

One of the things I learned during last year’s WWDC is that, even though the sessions are great, the labs are even better. And since the sessions are not open for questions and they’re usually up on iTunes for you to watch less than two weeks after the event, this year my main priority was the labs. I went to every lab I could think of and even went to some twice.

So, my advice to any WWDC attendee: forget the sessions and go to the labs! The sessions you can watch later at home, but you only have access to the great engineers who build the stuff we use for these 5 days, so make the most of it. Even if you have a stupid question, don’t be shy: go to a lab and ask it. These guys are great and always willing to help. This is consulting from Apple that is well worth the US$1600. I’d even dare say that $1600 is cheap! (don’t tell Apple though…)

I even bumped into a guy who helped me last year and who remembered me and my problem, and he tried to help me again this year even though my problem this time was not at all related to his expertise. Nice guy. Thanks again, Sam. See you next year.

Snap iPhone camera app

Oh, and have I mentioned that you should get Snap for your iPhone? Check it out. You don’t know how useful and fun your iPhone camera can be until you have Snap!

Well, that’s it. Sorry for the quick post. I’ll come up with something better next time. And if you have any comments on this post please leave them here and I’ll try to respond and correct whatever you guys come up with.

Getting metadata from images on iOS


My latest post was on how to write image metadata on iOS. I got a lot of good feedback from it so I think people are interested in this kind of stuff. I got 33 people watching my repo on GitHub. Cool, I got code stalkers!

One thing was missing from the post though: how to get metadata from existing images. In this post I’ll show you a few methods to do this as well as how to use my NSMutableDictionary category to do this.

Getting images using UIImagePickerController

If you’re getting images from a UIImagePickerController you have to implement this delegate method:

- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info

In iOS 4.1 or later the info dictionary has a key called UIImagePickerControllerReferenceURL (for images from the library) or UIImagePickerControllerMediaMetadata (for images taken with the camera). If your info dictionary has the UIImagePickerControllerMediaMetadata key, you just have to initialize your NSMutableDictionary with the NSDictionary you get from it:

NSMutableDictionary *metadata = [[NSMutableDictionary alloc] initWithDictionary:[info objectForKey:UIImagePickerControllerMediaMetadata]];

But if you took an image from the library, things are a little more complicated and not obvious at first sight. All you get is an NSURL object. How do you get the metadata from that? Using the AssetsLibrary framework, that’s how!

__block NSMutableDictionary *imageMetadata = nil;
NSURL *assetURL = [info objectForKey:UIImagePickerControllerReferenceURL];
ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
[library assetForURL:assetURL
         resultBlock:^(ALAsset *asset) {
             NSDictionary *metadata = asset.defaultRepresentation.metadata;
             imageMetadata = [[NSMutableDictionary alloc] initWithDictionary:metadata];
         }
        failureBlock:^(NSError *error) {
            // Handle the error here
        }];
[library autorelease];

One caveat on using this: because it uses blocks, there’s no guarantee that your imageMetadata dictionary will be populated by the time this code returns. In some testing I’ve done, it sometimes runs the code inside the block even before [library autorelease] is executed, but the first time you run it, the code inside the block will only run on another cycle of the app’s main run loop. So if you need to use this info right away, it’s better to schedule a method for later with:

[self performSelectorOnMainThread:SELECTOR withObject:SOME_OBJECT waitUntilDone:NO];

To make things easier, I’ve created an init method in my category:

- (id)initWithInfoFromImagePicker:(NSDictionary *)info;

You just have to import NSMutableDictionary+ImageMetadata.h in your file and then use:

NSMutableDictionary *metadata = [[NSMutableDictionary alloc] initWithInfoFromImagePicker:info];

And you’re done! The category checks the iOS version and the correct keys and does everything for you. Just be careful about the blocks issue I mentioned above.

Reading from the asset library

Well, I kinda spoiled the answer to this one already. If you’re using the AssetsLibrary to read images, you can use the method above, with the same caveat: the metadata might not be available until some time after the method is called.

Again I created an init method in my category:

- (id)initFromAssetURL:(NSURL*)assetURL;

Using AVFoundation

iOS 4.0 introduced AVFoundation, which gives us a lot of possibilities for working with pictures and the camera. Before iOS 4, if you wanted to take a picture you had to use a UIImagePickerController. Now you can use AVFoundation and have a lot of control over the camera, the flash, the preview, etc…

If you use AVFoundation to capture photos you’ll probably use AVCaptureStillImageOutput’s:

- (void)captureStillImageAsynchronouslyFromConnection:(AVCaptureConnection *)connection 
                                    completionHandler:(void (^)(CMSampleBufferRef imageDataSampleBuffer, NSError *error))handler

The completion handler gives you a CMSampleBufferRef that carries the metadata, but how to get it out of there is not clear from the documentation. It turns out it’s really simple:

CFDictionaryRef metadataDict = CMCopyDictionaryOfAttachments(NULL, imageDataSampleBuffer, kCMAttachmentMode_ShouldPropagate);

Since CFDictionaryRef is toll free bridged with NSDictionary, the whole process would look like this:

CFDictionaryRef metadataDict = CMCopyDictionaryOfAttachments(NULL, imageDataSampleBuffer, kCMAttachmentMode_ShouldPropagate);
NSMutableDictionary *metadata = [[NSMutableDictionary alloc] initWithDictionary:(NSDictionary *)metadataDict];
CFRelease(metadataDict); // CMCopy… follows the Create/Copy rule, so we own this reference

At the risk of repeating myself, I again created an init method for this:

- (id)initWithImageSampleBuffer:(CMSampleBufferRef) imageDataSampleBuffer;

Wrapping up

So, there you have it, now you can read and write metadata.

What’s still missing are some methods to easily extract information from this dictionary. I have already created one to extract the CLLocation information. Since I now have a way to get and set this information, I even turned it into a @property on the category, giving our NSMutableDictionary a nice way to access the location using dot notation.

It’s very easy to add getter methods for every property but I have not done so yet. Feel free to fork my repo on GitHub and send pull requests for me to incorporate.

I also added a method for the image’s digital zoom, since the next update of Snap will have digital zoom and I’m writing this information to the pictures as well.

Snap iPhone camera app

Oh, and have I mentioned that you should get Snap for your iPhone? Check it out. You don’t know how useful and fun your iPhone camera can be until you have Snap!

Adding metadata to iOS images the easy way

Does it have to be so hard?

Are you writing a camera or image-editing app for iOS but clueless about how to add geolocation to your pictures? Baffled by the lack of information in the otherwise very thorough Xcode documentation? I feel your pain, my friend. Or actually, felt, ’cause I’ve got your meds right here.

When developing Snap I wanted to add this feature so that it could actually replace the built-in camera app. And since the built-in camera app adds geolocation, along with a lot of other metadata to the images, Snap had to do this too.

I present to you my NSMutableDictionary category that will solve all your problems. Ok, maybe not all, but the ones related to image metadata on iOS anyway.

For those with no patience, here’s the GitHub repo: The repo contains an Xcode project that should compile a nice static library for you to use in your projects. I plan on adding a lot of utility classes here, so you might just want to pick and choose whatever you need instead of using the whole library.

The category is easy enough to use if you check out the code, but I’ll explain a few things on how to use it for those that never had to deal with image metadata on iOS before.

Who is this metadata person anyway?

For those of you who have no idea what I’m talking about: image metadata is most commonly known as EXIF data, even though that’s slightly wrong, because EXIF is only one type of metadata that can be embedded in an image file. My category deals with EXIF metadata, as well as TIFF and IPTC metadata, depending on what kind of information you want to add to the image.

For example, the Original Date of an image can be embedded inside an EXIF property or inside a TIFF property. My category knows this and if you want to embed this date it will set both properties for you.

You can see all this metadata using most image viewers. On OS X, pressing cmd-I in the Preview app shows an image’s metadata.

How does it work on iOS?

iOS SDK 4.1 introduced some methods that allow an app to write metadata into an image. One example is ALAssetsLibrary’s:

- (void)writeImageToSavedPhotosAlbum:metadata:completionBlock:

That takes an NSDictionary as the metadata source. What the documentation doesn’t explain (or at least I could not find) is how this dictionary should be structured. I googled a lot and found some examples online that I used as a starting point for the category (sorry, can’t remember most of them…).

Turns out that this dictionary consists of a lot of other NSDictionaries with key/values that depend on the type of metadata you’re adding. You can find all the dictionaries that go inside this dictionary (I know… even I’m getting confused with so many dictionaries…) in the CGImageProperties Reference in the documentation.

I’ll try to explain with an example. Say you want to add a “Description” property in your image. This property sits inside the TIFF dictionary. So, in order to add this information to your metadata dictionary you can use this code:

NSMutableDictionary *tiffMetadata = [[NSMutableDictionary alloc] init];
[tiffMetadata setObject:@"This is my description" forKey:(NSString*)kCGImagePropertyTIFFImageDescription];
NSMutableDictionary *metadata = [[NSMutableDictionary alloc] init];
[metadata setObject:tiffMetadata forKey:(NSString*)kCGImagePropertyTIFFDictionary];

Why am I using NSMutableDictionary? Well, in this example you really don’t have to, but say you want to add another TIFF property to your metadata: with NSMutableDictionary you can just add another key/value to the tiffMetadata dictionary. If you used NSDictionary you’d have to create a new NSDictionary with the old key/values plus the new one. Not cool…
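For example, adding a second TIFF property later is a one-liner on the mutable dictionary (the value here is just an illustration):

```objc
// Add another property to the same TIFF sub-dictionary built above
[tiffMetadata setObject:@"My Camera App 1.0"
                 forKey:(NSString *)kCGImagePropertyTIFFSoftware];
```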

Adding geolocation is even harder. Geolocation has its own dictionary with a lot of possible values that are NOT explained in the documentation. The best information I found about this was in this StackOverflow question that I used as the basis for my implementation.

Please, help, I don’t want to do this…

The NSMutableDictionary+ImageMetadata category takes all this complexity away from your code. To add geolocation to your metadata dictionary, all you have to do is this:

NSMutableDictionary *metadata = [[NSMutableDictionary alloc] init];
[metadata setLocation:location];

Where location is a CLLocation instance. That’s it. The category will create the appropriate dictionary and add it to your NSMutableDictionary with all the appropriate key/values. I’ve implemented some other interesting setters, and there are helper methods that make it very easy to add setters for other properties:

- (void)setLocation:(CLLocation *)currentLocation;
- (void)setUserComment:(NSString*)comment;
- (void)setDateOriginal:(NSDate *)date;
- (void)setDateDigitized:(NSDate *)date;
- (void)setMake:(NSString*)make model:(NSString*)model software:(NSString*)software;
- (void)setDescription:(NSString*)description;
- (void)setKeywords:(NSString*)keywords;
- (void)setImageOrientarion:(UIImageOrientation)orientation;
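To give you an idea of what setLocation: saves you from writing, here’s roughly the kind of GPS dictionary it builds under the hood (abridged; the real category handles more keys, such as altitude and timestamp):

```objc
CLLocationCoordinate2D coord = location.coordinate;
NSMutableDictionary *gps = [NSMutableDictionary dictionary];
// GPS values are stored unsigned, with separate N/S and E/W reference keys
[gps setObject:[NSNumber numberWithDouble:fabs(coord.latitude)]
        forKey:(NSString *)kCGImagePropertyGPSLatitude];
[gps setObject:(coord.latitude >= 0 ? @"N" : @"S")
        forKey:(NSString *)kCGImagePropertyGPSLatitudeRef];
[gps setObject:[NSNumber numberWithDouble:fabs(coord.longitude)]
        forKey:(NSString *)kCGImagePropertyGPSLongitude];
[gps setObject:(coord.longitude >= 0 ? @"E" : @"W")
        forKey:(NSString *)kCGImagePropertyGPSLongitudeRef];
// The GPS sub-dictionary goes into the top-level metadata dictionary
[metadata setObject:gps forKey:(NSString *)kCGImagePropertyGPSDictionary];
```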


After setting all your properties, you can call ALAssetsLibrary’s -writeImageDataToSavedPhotosAlbum:metadata:completionBlock: or -writeImageToSavedPhotosAlbum:metadata:completionBlock: using your very special NSMutableDictionary and you’re all set!

Getting metadata

There’s another hard-to-find issue with metadata, and that’s getting it from an image you just took using UIImagePickerController or an AVCaptureStillImageOutput. I’ll deal with this problem in another post, but rest assured that our friendly category will help you a lot there too. (UPDATE: The reading part is in this blog post)

Can I use this?

Yes, use it, fork it, spread the word. And if you make any improvements to your fork, or if you found a bug or a better way to do things, please send me a pull request so that I can incorporate your improvements into the main branch.

And if you really want to help me out and get a nice app at the same time, get Snap for your iPhone. Best 2 bucks you’ll spend today!

That’s why I do what I do

As a software developer I get a lot of pleasure when code that I wrote, all those ifs and methods and classes, turns into a real thing, something on my computer screen, doing what I summoned it to do.

But I get even more pleasure when I see someone using these things, when I meet someone that tells me how wonderful it is, when I see someone I never met before interacting with it with pleasure, when I see a kid using it so naturally.

And I thought it could not get better than this, but it can.

I released Snap a few weeks ago and it was getting some attention. AppAdvice had already written a review of version 1.00, and when I released version 1.01 we decided to do a giveaway together.

Christine Chan asked her readers to describe in the comments how they would use Snap if they got a promo code, and one guy caught my attention with a use I had never thought of:

James F Bagnell Jr MS
March 26, 2011 • 2:55 pm

“I am a behavioral specialist who works with Autistic children. As part of their Social Skills training I’ll use this app to take snap-shots of the items they use and the people they work with to set up a picture schedule that will lay out their daily routine (which always includes reading them the AppAdvice daily app). Since the parents of the children I work with are less fortunate, I try and make use of the resources I have available. Thanks for keeping me updated on new apps like this one! AppAdvice is the best.”

That really moved me. When the giveaway was over I asked Christine if this guy was one of the winners; if not, I was going to give him a promo code anyway. As it happens he was one of the winners, and Christine was nice enough to send another email asking James to contact me, which he did the next day. I asked him if it was OK for me to write about it and he said yes.

Of all the uses I had imagined for the app, helping autistic children was not one of them, so I was curious about how it was being used to this end. James explained in another email:

“Autistic children learn in differing ways than those without. One of the ways I teach my clients is through the use of Social Scenarios. These are little homemade picture books that use photographs of everyday people, places, and events that Autistic children encounter throughout their day. One example (which I am working on right now) is a social story that lays out the daily schedule of one of my clients in order to help him transition. The first picture shows his alarm clock displaying the time he is to wake up. The next photograph shows his dresser and the heading reads “this is my dresser, I keep my clothes in here to wear. What will I wear today?”. This continues through the daily rituals of the child and helps him to transition better by understanding his expectations.”

I researched a bit and found that this is called “Social Stories”. According to Wikipedia:

Social Stories are a concept devised by Carol Gray in 1991 to improve the social skills of people with autism spectrum disorders (ASD).
Social Stories are short stories written or tailored to an autistic individual to help them understand and behave appropriately in social situations. The stories have a specifically defined style and format.
They describe a situation in terms of relevant social cues, the perspective of others, and often suggest an appropriate response. They may also be used to applaud accomplishments; roughly 50% of all Social Stories are targeted to be used for this reason.

So, using Snap I took some pictures around my house and played with it a bit:

These were done using version 1.10 of Snap, which is waiting to be approved by Apple as I write this. Hopefully this new version will make the app even better for people like James. I think James might like the AirPrint capability of this new version; it might make his social scenarios even easier to produce: just snap a picture, write what you want and send it to an AirPrint-compatible printer.

Before Snap I didn’t know about Social Stories and had no idea Snap could be used to help autistic children. So now I’m wondering: what else is Snap being used for? The giveaway gave me a lot of examples I hadn’t thought of, and I’m sure there are a lot more uses that will never cross my mind.

And this is why I love doing what I do. The bits and bytes are just the raw material I use to build software that can be used by people in ways far beyond my imagination.

So, think you can do something cool with Snap that I haven’t thought of? Leave a comment and you might win a copy of Snap.