Tiny trick: Swap SurfaceTexture between TextureViews.

A SurfaceTexture connects a GL producer with a GL consumer on Android. It's easy to swap the producer, since you can just disconnect it from any thread, and it's easy to swap the consumer if you control the thread that originally created it. So here is the trick to swap out the consumer while preserving the producer connection: 

  1. Save: SurfaceTexture texture = textureView.getSurfaceTexture();
    This saves the texture; make sure onSurfaceTextureDestroyed() returns false so the texture is not released with the old view. 
  2. Detach: textureView.getParent().removeView(textureView); 
    This will call SurfaceTexture.detachFromGLContext() for you on the right thread.
  3. Attach: newTextureView.setSurfaceTexture(texture);
    This will call SurfaceTexture.attachToGLContext() for you on the right thread.

After this, the producer connected to the texture (a MediaCodec decoder/encoder, a render thread, etc.) will be producing buffers that get displayed on the new TextureView. Of course, this trick also works for SurfaceView. 
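
Putting the three steps together, here is a minimal sketch, assuming both views live under the same parent; the names (oldView, newView) and the listener wiring are illustrative, not a drop-in implementation:

import android.graphics.SurfaceTexture;
import android.view.TextureView;
import android.view.ViewGroup;

void swapSurfaceTexture(TextureView oldView, TextureView newView) {
    // 1. Save the texture before the old view goes away.
    final SurfaceTexture texture = oldView.getSurfaceTexture();
    if (texture == null) return; // surface not ready yet

    // The listener must return false so the texture survives step 2.
    oldView.setSurfaceTextureListener(new TextureView.SurfaceTextureListener() {
        @Override public void onSurfaceTextureAvailable(SurfaceTexture st, int w, int h) {}
        @Override public void onSurfaceTextureSizeChanged(SurfaceTexture st, int w, int h) {}
        @Override public void onSurfaceTextureUpdated(SurfaceTexture st) {}
        @Override public boolean onSurfaceTextureDestroyed(SurfaceTexture st) {
            return false; // keep the SurfaceTexture alive for re-attachment
        }
    });

    // 2. Detach: removing the view calls detachFromGLContext() on the right thread.
    ((ViewGroup) oldView.getParent()).removeView(oldView);

    // 3. Attach: this calls attachToGLContext() on the right thread.
    newView.setSurfaceTexture(texture);
}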

Simple foreign Android codebase hands on guide (Part 1)

So you just joined a team, or you are working on a part of the code whose file/class names you have forgotten, and you want to build a new feature or fix some bugs but are not sure where to start. The codebase is huge (millions of lines of code) and you have no idea where to begin. This is the flow that I usually follow and tell my mentees to use:

Working on a bug: 

Android Example: I clicked on something in the UI, but it didn't work. I expected a UI change.

1. Look for state changes: is this a reproducible bug? Are there any changes in the UI that you can identify, such as text or color? Is it a reaction to a user action such as a click or touch? Or is it a model change or device state change?

Identify the state changes and find where they occur. This will narrow you down to a specific place: click/touch listeners, model setters, or broadcast receivers.

Android Example: Search for the string you clicked on; that will give you a clue to the class or layout XML file. Now find the best places to put breakpoints.

2. Did you click on the right item? Is the right click listener getting triggered?

Android Example: Look for where the listeners are assigned and put a breakpoint on them. See if they are triggered. Is your listener set correctly? If not, you may be looking at the wrong place: either the wrong click listener is triggered, the wrong view is getting the callback, or you have overridden one of the expected responders. Fixing the layout or the UI responder/listener may fix your problem.
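
A quick sanity check is to log from the listener you believe should fire; this is a minimal sketch, and the tag and handler body are made up:

button.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View v) {
        // If this never prints, the wrong view or listener is wired up.
        Log.d("DebugFlow", "clicked view id=" + v.getId());
        // ...original click handling...
    }
});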

3. Now your trigger is happening correctly. Is the action it triggered getting processed properly? If you are still having problems, trace how the action is handled and whether the response after handling it is delivered properly. At this point you have already found the entry points, and the rest should follow general debugging methodologies. 

Following this flow will help you fix most of the bugs you find in an unfamiliar codebase with very high efficiency. :) 


Back to the startup world

Though I was not impacted by Twitter's recent house cleaning (fact: none of Vine Engineering was impacted), I decided it is time for me to get back to the startup world again. 

I joined Vine almost three years ago, when there were only 8 people and no heat in the office. I started working on our Android app with Sara, not knowing if Vine would even still be here in a year. Here I am with almost 50 people and a 6-person Android team. It's been a great learning experience alongside some of the best engineers and product people I have met. I am really proud to call myself a Viner and feel really fortunate that I got to work with each one of you. I made some of my best friends here: Felix, Ryan G, Sara, Matt, Dman, Ben, and countless others… Vine is and will be one of the greatest entertainment platforms in the world! 

I really can't express how much I appreciate the rest of the team; it's been a great three years. 

I have been exploring different NYC startups for a while now, and there is an ever-growing number of amazing ones around. I wasn't sure at first whether I wanted to start one myself. After much thinking, I decided the top priority is whether I can learn from someone great and keep myself surrounded by amazing people, so I can prepare myself a bit more before starting my own. 

I'll be joining a startup called MoLabs as their first engineer. It was the earliest-stage one I was exploring, and I can see myself learning so much from Jim, who was the CEO of MoPub...so that one day I can found great companies myself.

More exciting news to come soon.

Reducing build times by adopting buck

Note: I was asked to lead a topic discussion at the NY Mobile Forum at Facebook this past weekend, and I decided to do one on developer productivity hacks; the content of this article was used to ignite the conversation. 

This is part of Vine's Engineering Blog series. 

Since we started working on Vine for Android, we have used the following tools for development:

  • Android Studio (IntelliJ before Android Studio 0.1.0 was released)
  • Gradle build system
  • Crashlytics and other third party plugins
  • Jenkins for CI

In this post, we'll talk about how we reduced our iteration time during Android development by adopting Buck alongside our current Gradle structure.

We like Gradle because it supports different configuration targets well, and it's very easy to configure and have everything merged for you. For example, we currently have to support multiple APKs compiled from twenty different library modules that target different platform versions, different remote targets, different app stores, and different test scenarios. Gradle scales well and makes all these configurations easy to build and test. It also works well with Maven and has native support within Android Studio. On top of that, we have several custom build steps and plugins that run during different phases of the build process for some of our builds.

Since we first released Vine for Android, two things have happened: (a) the app has gotten much bigger as we add more assets and (b) build time has increased as more build steps are added.

Currently a clean build takes about four to five minutes, and a one-line change at the end of the dependency chain takes about one minute to build and install. That's with "--parallel --daemon --offline" enabled; it takes even more time without those flags. Across our small (and growing) Android team, we spend a few hours of dev time every day on building and installing.

We tried to figure out why it was so slow by running Gradle with "--profile", and it turns out that for a one-line change, "dex" (merging all the pre-dexed modules into the final .dex) and "install" (sending the APK to a device via USB and installing the app) take about 90% of that minute. If we could fix those, we would be in good shape.

It turns out Facebook's Buck build system has an "exopackage" mode that does exactly this. Buck has many tricks, such as recompiling dependents only when a method signature has changed, which results in fewer files needing to be compiled, and a faster dex step that is O(n log n) instead of O(n^2). The thing that makes it really fast, though, is the multi-dex trick: it only dexes the modules that have changed (instead of re-merging all of them), and it transfers only the modified dex file (instead of the entire APK) to the device when you make a one-line change, which is what usually happens during development.

Knowing this, we wanted to move to Buck. It was intimidating because we depend on Gradle for many things, so we decided to have Buck live alongside Gradle and see how things went. With that approach, the task was not as intimidating and could easily be broken down into a few steps (a sketch of the resulting config follows the list):

  • Grab all the remote jars and aars and make them local
  • Change R values that were assumed final so they are assigned at runtime
  • Create a merged AndroidManifest.xml file for Buck
  • Create a Buck config for our debug config
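
To give a sense of what the last step looks like, here is a minimal sketch of a BUCK file; every rule and target name below is illustrative, not our actual config:

android_library(
    name = 'main-lib',
    srcs = glob(['src/**/*.java']),
    deps = [
        ':local-jars',  # the formerly remote jars/aars from step one
        ':res',
    ],
)

android_binary(
    name = 'debug-app',
    manifest = 'buck/AndroidManifest.xml',  # the merged manifest from step three
    keystore = ':debug-keystore',
    exopackage = True,  # the fast incremental-install mode described above
    deps = [':main-lib'],
)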

The whole process took about three dev days, and the results are amazing. Our clean builds now take about 40 seconds, and our one-line changes take about three to ten seconds, depending on the module you’re changing.  

Doing it this way, we are able to use Buck for feature development, keep Gradle working for Android Studio, and still use Gradle for release builds to different stores and manufacturers. The cons are, of course, that we lose Maven's dynamic dependency upgrades, and that changes to the manifest files for Gradle modules need to be merged into the respective Buck config files. We opted for this because the chances that we will be modifying our AndroidManifest are now low, and we rarely update our dependencies.

The next step is to make the sync process between Gradle and Buck more automated. We are quite happy with the current flow, and we think you should consider doing the same if a slow Gradle configuration is dragging down your iteration cycles.

Vine Loop Counter View

Vine recently launched Loops, and one of the fun parts I took on was building the counter animation that shows the loop count. I thought I'd share it as part of Vine's open source efforts: 

CounterView.java 

Usage

setKnownCount(): give it the current count, the time that count was obtained, and the velocity at which the count should be increasing. 

setExtraCount(): give it extra counts independent of the known-count variables.
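
A minimal usage sketch (the view id, counts, and velocity are made-up values, and the exact signatures are my assumption based on the description above):

CounterView counter = (CounterView) findViewById(R.id.loop_counter);
// 500 loops known as of now, increasing at roughly 2 loops per second.
counter.setKnownCount(500L, System.currentTimeMillis(), 2.0f);
// 3 locally counted loops on top of the known count.
counter.setExtraCount(3L);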

Animation Modes

The gist version supports three different AnimationMode parameters:

continuousAnimation: whether the increments are continuous. If false, the animation runs once up to the current number, then stops until the next time the count changes.

pedometerAnimation: if true, each digit moves up one step at a time instead of skipping when the increment for that digit is greater than 1.

alphaAnimation: if true, the alpha changes as a percentage of the animation's completion. 

The defaults in the gist are non-continuous, non-pedometer, alpha-on, which is how it runs in Vine for Android as of version 2.1.0.

Other Customizations

You can of course play with the digit spacing, animation durations, and typefaces, either through the given methods or by changing the constants. Test with the velocities you want to use and you will see interesting effects. I had a demo app working with all the different variations, but I'll leave that to the reader. 

How it works

On count invalidation, the new count is checked against the current count and the digit sizes are adjusted. The current count is calculated from the starting count, the starting time, and the extra count, producing an individual state for each digit. Each digit keeps track of its own animation state, and then the View's onDraw is triggered via view invalidation.

On view invalidation, onDraw simply draws each digit using its current state, then posts a runnable that recalculates the states. Note that the frame rate is adjustable via a constant that sets the delay between state recalculations. 
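
The loop looks roughly like this sketch (FRAME_DELAY_MS, drawDigits(), and updateDigitStates() are placeholder names, not the gist's actual ones):

private static final long FRAME_DELAY_MS = 16; // ~60fps; the adjustable constant

private final Runnable mUpdateRunnable = new Runnable() {
    @Override
    public void run() {
        updateDigitStates(); // recompute each digit's animation state
        invalidate();        // schedules the next onDraw pass
    }
};

@Override
protected void onDraw(Canvas canvas) {
    super.onDraw(canvas);
    drawDigits(canvas);                           // draw each digit in its current state
    postDelayed(mUpdateRunnable, FRAME_DELAY_MS); // drive the next frame
}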

-

Comments? Bugs? Suggestions? Feel free to leave them here or on the gist. 

Allocating Camera memory faster on Android Part Two

Part 1 Part 2

In Part 1, I talked about how to avoid GCs so you can get reasonable speeds when taking frames from Android Camera's onPreviewFrame method and processing them without losing any. It was basically as follows (let's call this Method A):

1. Get faster memory allocation, using the tricks mentioned, for small pieces of memory (byte[]). The number of byte[] needed for the slowest device is the maximum number of frames to process (N).

2. Pass the frame from onPreviewFrame to another thread.

3. The other thread processes the data, then gives the buffer back to the Camera. 

It turns out there is another way to do it that's much faster, Method B:

1. Get faster memory allocation, using the tricks mentioned, for small pieces of memory (byte[]). The number of byte[] needed for the slowest device is only about 10.

2. Copy the frame from onPreviewFrame into a shared large ByteBuffer queue that's big enough to fit the maximum number of frames to process (generate this queue with ByteBuffer.allocateDirect(N * singleFrameSize)), then give the byte[] buffer straight back to the Camera.

3. Another thread manages the queue independently of the onPreviewFrame thread (processing frames, dropping frames under pressure, etc.).
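
Here is a minimal sketch of Method B, assuming NV21 frames; the class and field names are illustrative, not Vine's actual code:

import android.hardware.Camera;
import java.nio.ByteBuffer;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

class FrameQueue implements Camera.PreviewCallback {
    private static final int MAX_FRAMES = 180;  // N for a 6s, 30fps clip
    private final int frameSize;                // NV21: width * height * 3 / 2
    private final ByteBuffer bigBuffer;         // one large native allocation
    final BlockingQueue<ByteBuffer> pending =
            new ArrayBlockingQueue<ByteBuffer>(MAX_FRAMES);
    private int nextSlot = 0;

    FrameQueue(int previewWidth, int previewHeight) {
        frameSize = previewWidth * previewHeight * 3 / 2;
        bigBuffer = ByteBuffer.allocateDirect(MAX_FRAMES * frameSize);
    }

    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        // Carve the next frame-sized slice out of the shared buffer and copy in.
        // Wrap-around assumes the processing thread keeps up or drops frames.
        bigBuffer.position(nextSlot * frameSize);
        ByteBuffer slice = bigBuffer.slice();
        slice.limit(frameSize);
        slice.put(data, 0, frameSize);
        nextSlot = (nextSlot + 1) % MAX_FRAMES;
        pending.offer(slice);           // the processing thread drains this queue
        camera.addCallbackBuffer(data); // return the small byte[] right away
    }
}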

Comparison on cold launch: 

Method A: Requires generation of N byte[] in Java, and a total of N * singleFrameSize bytes. 

Method B: Requires generation of 10 byte[] in Java, (N + 1) * singleFrameSize in memory block allocation, and a total of (N + 11) * singleFrameSize bytes. 

Even done right, Method A can trigger lots of GCs, averaging about 0.1s * N; for 180 frames, even assuming only the last 20% trigger a GC thanks to the large-chunk-first trick, that is still about 3s. With Method B, allocation basically costs about 0.1s * 11, so it takes only about 1s. 


Allocating Camera memory faster on Android Part One

Part 1 Part 2 

One thing we learned while building the capture part of Vine for Android was how to deal with all the raw buffers needed to satisfy the stop-motion requirements. (According to Instagram, they were able to use the native MediaRecorder, with a 700ms+ delay on start time and a minimum duration, but Vine can't afford that if it wants to do stop motion.) And because we can't use MediaRecorder, other libraries are linked in to do the encoding. 

To use the raw buffers, setPreviewCallbackWithBuffer is used in place of setPreviewCallback, and addCallbackBuffer must be called with a minimum number of frames added prior to (or during) preview. This way, no buffers are generated at run time, so there is no lag during recording (which would cause serious frame drops). For Vine, we take the frames and put them on a concurrent queue; another thread takes the buffers off the queue, processes each frame, and then puts the buffer back to the Camera. For a 6-second 30fps video, a maximum of 180 frames is needed if the user records one single long clip. And there's the problem: 180 frames of raw bytes is a lot to allocate up front. Each frame is about 1MB, so allocating them all at once will likely cause an OOM and turns out to be really slow. Let's look at the iterations we went through to minimize the problem, as well as how to make everything else faster. 
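
The basic setup looks like this sketch; the frame count, the queue, and the surrounding names are illustrative:

int frameSize = previewWidth * previewHeight * 3 / 2; // NV21 frame size
for (int i = 0; i < maxFrames; i++) {
    camera.addCallbackBuffer(new byte[frameSize]);    // pre-allocate every buffer
}
camera.setPreviewCallbackWithBuffer(new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera cam) {
        processingQueue.offer(data); // hand off to the processing thread, which
                                     // calls cam.addCallbackBuffer(data) when done
    }
});
camera.startPreview();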

-- 

Naive solution: Add 180 frames prior to startPreview, guaranteeing 180 frames on all phones. Do all the allocations and initialization of classes and objects when the user starts recording. 

Result: GC_ALLOC happens, OOM happens on some phones, and fragmented heap growth pushes allocation up to 10-30 seconds on certain phones. It takes 1-2 seconds before allocation even starts. 

 --

The first thing I tried was to identify the bottlenecks during recording so that we wouldn't need that many frames. Could processing be faster, so fewer frames are needed?

Processing a frame really consists of four small steps, so it was not hard to time them. 

(All times below are relative to each other rather than real times, since they vary by device.) 

1. Convert an NV21 frame to a Bitmap for manipulation. (Time: 50x)

2. Do bitmap manipulation on the converted Bitmap. (Time: 5x)

3. Encode the bitmap. (Time: 20x)

4. Write to the container. (Time: 1x)


Optimize processing:  

1. If conversion in Java takes about 50x, can we do better in native code? Or is there a better solution? It turns out that if we do the color conversion on the GPU via an intrinsic RenderScript (a super-optimized conversion script), it goes from 50x to 1x with just a few lines of code (see the sketch after this list). Unfortunately, this is Android 4.2+ only at the time of writing, though a support library may back-port it to older Android versions in the future. 

2. The bitmap manipulations were done as separate steps (rotation, clipping, inversion); combining them into a single Matrix pass cut the time from 5x to 2x (also shown in the sketch after this list). 

3. Encoding: there isn't much we can do here, since the encoding algorithm is already optimized. Using MediaCodec would bring the time down from 20x to 10x, but that is 4.1+ only and there is no sign that a support library will ever back-port it. 

4. Writing to the container is super fast; nothing to be done here for now. 
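
To make steps 1 and 2 concrete, here is a minimal sketch of the optimized conversion-and-manipulation path on API 17+; the names and values (nv21, width, height, the 90-degree rotation) are illustrative rather than Vine's actual code:

import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.Matrix;
import android.renderscript.Allocation;
import android.renderscript.Element;
import android.renderscript.RenderScript;
import android.renderscript.ScriptIntrinsicYuvToRGB;

Bitmap convertAndTransform(Context context, byte[] nv21, int width, int height) {
    // Step 1: NV21 -> RGBA on the GPU via the intrinsic script.
    RenderScript rs = RenderScript.create(context);
    Bitmap rgb = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    ScriptIntrinsicYuvToRGB yuvToRgb =
            ScriptIntrinsicYuvToRGB.create(rs, Element.U8_4(rs));
    Allocation in = Allocation.createSized(rs, Element.U8(rs), nv21.length);
    Allocation out = Allocation.createFromBitmap(rs, rgb);
    in.copyFrom(nv21);
    yuvToRgb.setInput(in);
    yuvToRgb.forEach(out);
    out.copyTo(rgb);

    // Step 2: rotation and inversion collapsed into one Matrix pass.
    Matrix m = new Matrix();
    m.postRotate(90);
    m.postScale(-1, 1); // mirror, e.g. for the front-facing camera
    return Bitmap.createBitmap(rgb, 0, 0, width, height, m, true);
}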

What this bought us is that we could cut the requirement from 180 frames down to 140 on certain devices, and 120 on 4.2+ devices. (We have a device profiling system for this.) 


Improved processing solution: Add 140 frames prior to startPreview, guaranteeing 140 frames on all phones. Do all the allocations and initialization of classes and objects when the user starts recording. 

Result: GC_ALLOC happens less, OOM still happens on some phones but less often, and fragmented heap growth pushes allocation up to 5 seconds on certain phones before they can start recording. It takes 1-2 seconds before allocation starts. (The big improvement here is because GC on the last 40 frames is usually the slowest.) 

This is still unacceptable. 

--- 

Improve allocation speed: Lying to get more memory is good. 

Why does GC happen? Why is growing the heap even needed if we know how much memory we need?  

GC happens when the allocated heap hits about 70% capacity, and the heap grows in fragments because we only ask for a small byte[] at a time.  

It turns out that right before adding the small buffers, I can add the following code to make the whole thing up to 100x faster:

byte[] temp = new byte[(int) (140 * requiredSize * 1.5f)]; // over-ask so the heap grows once
temp[0] = 1;  // touch the array so the allocation is not optimized away
temp = null;  // explicitly drop the reference

This makes GC_ALLOC happen much, much less (sometimes only once), and the heap no longer grows more than once. 


Result: GC_ALLOC happens much less, OOMs surface faster, and allocation takes up to 2 seconds on certain phones before they can start recording. It takes 1-2 seconds before allocation starts. 

Much better, but can we do better? 

--- 


The rest of the improvements were around using a service that keeps classes loaded, and reusing a ByteBuffer queue when recording restarts so that we don't have to allocate more buffers, eventually bringing the OOMs down to a very small number and allocation times to about 1.5s. The details are not important; what's important is that there was so much room for improvement, in many places we did not expect to make a huge impact. Timing the execution and using MAT-like tools was very important at first for identifying the bottlenecks. 


Is Android fragmentation an issue?

For consumers? No. 

Consumers want the best phone they can afford, and Android provides exactly that with lots of options across the entire price range. Do they really care whether a phone has 512MB of RAM and a 1.4GHz dual-core CPU, versus another with 1GB of RAM and 1.9GHz? They can't really tell the real benefits of the different phones. And they certainly don't care that a specific app is missing on a certain phone if having it means paying $100 more (provided that most of the most popular apps are compatible with most phones).

For developers? Yes, but not really.

No, because most apps will work just fine if you follow the best practices for Android. Unless you are doing something wrong, you won't run into many issues. There are a lot of gotchas, but the answers are mostly on StackOverflow. Porting apps to different Android devices is not nearly as hard as coding for another platform.

Yes, because if you run into weird problems, there is not much help you can get, especially if you are using the newer APIs. After developing a dozen apps in different categories, SleepBot and Vine account for almost all of the hardest problems, because they interact with the Camera, MediaRecorder, MediaPlayer, and OpenGL components. On the other hand, Squarespace and the other apps had no problem adapting to all the platforms and devices; at most you will be dealing with some mistakes made on database-related issues. I remember one Vine feature hitting a different problem on each flavor of the Galaxy S2 because some functions were not implemented according to the SDK. This has gotten significantly better with 4.1+ devices, which is why Instagram was 4.1+ only when its video feature was released. 

That being said, if you are not using any special hardware components or the more specialized APIs, there is nothing to worry about. Making a todo app is just as easy on Android as on iOS. 

How to make in-page margin animations smooth for ViewPager pages

tl;dr: modify setOffscreenPageLimit dynamically.

To keep scrolling smooth in a constant-length ViewPager, setOffscreenPageLimit(pages.length) will keep all the views in memory. However, this poses a problem for any animation that involves calling View.requestLayout() (e.g., any animation that changes margins or bounds): it makes them really slow, because (as per Romain Guy) all of the views in memory get invalidated as well. I tried a few different ways to make things smooth, but overriding requestLayout() and the other invalidate methods caused many other problems.

A good compromise is to dynamically modify the offscreen limit, so that most scrolls between pages are very smooth, while keeping the in-page animations smooth by letting the other views be removed when the user lands on the animated page. This works really well when only one or two pages need the other views out of memory.

@Override
public void onPageScrollStateChanged(int state) {
    if (state == ViewPager.SCROLL_STATE_IDLE) {
        if (mViewPager.getCurrentItem() == INDEX_OF_ANIMATED_VIEW) {
            // On the animated page: drop the other pages so requestLayout stays cheap.
            mViewPager.setOffscreenPageLimit(1);
        } else {
            // Elsewhere: keep every page in memory for smooth scrolling.
            mViewPager.setOffscreenPageLimit(OLD_PAGE_LENGTH);
        }
    }
}

Cooper is far from dead

Yesterday, Cooper Union's administration announced that Cooper will no longer offer a 100% free scholarship, starting with the incoming freshmen in fall 2014. Admission will still be need-blind, but the school will charge those who can afford it up to 50% of the full price (around $20,000). 

Within one hour, my Facebook feed blew up with almost every classmate of mine posting something about it and how "Cooper has died". 

I am not a fan of the decision the administration came up with (just like everyone else, I wish the school could be free forever), but I do think this was eventually going to happen, and it is one of the best ways it could have turned out. If this lets the school last forever, and we can still pay for those who cannot afford it, why not? I understand Cooper's vision that "education should be free as air and water", but in difficult times, why not let the rich chip in a little? The bloated administration is another issue, but that is not something that is easy to cut down. It's just unrealistic. 

RIP the Free Cooper Union....

But Cooper has not died. Yes, the admission rate will go up a bit. Yes, some who would otherwise attend Cooper will go to MIT or Harvard instead. But MIT is not free, so why is it so competitive? Not because of price, but because it has the reputation of being one of the best schools in the world, producing the best talent in its fields. So, theoretically, if Cooper alums like you and me keep doing the best we can to succeed in our fields, we can compensate for some of the "reputation" Cooper has lost from the scholarship reduction.

Only with alumni and faculty support may we one day get the full scholarship back. Or maybe we never will, but we still have to keep the place that has given us such a unique opportunity alive. 

Another selfish wish of mine is that future alums, who may have to pay some of their tuition, will still appreciate Cooper and feel grateful that they didn't have to pay $50k a year for college. Cooper is not defined purely by its free tuition, but also by the quality of the education and the experience it gives. 

Go start your companies, fellow Cooper kids; let's make the next Facebook and bring the free-tuition tradition back. :) 

Becoming a better programmer

I think I'm going through the different phases of becoming a better programmer these days, and the feeling is really good.

Sources of improvement:

1. School: CS background, knowing how to break a problem down and analyze the strategy to solve it.

2. Working on side projects for fun in school: free learning, time management, experiencing the joy. This source also gets you the most rewards. 

3. Working with a large team at a big company: you learn bureaucracy and how large companies maintain code and process.

4. Working at a mid-size company with a small team: you learn how to manage your own time since you won't have a manager. You learn to maintain your code better as the company grows.

5. Working at a tiny company you started: stressful, but rewarding. At this point you just write things that work, but the good habits developed in #4 really help.

6. Working as an independent contractor: time management + fast coding + coding for yourself + half-managed style.

7. Back to working at a startup with a rockstar team. :)

Endroid's local comm done. :)

Since school ended for me last month, I have started on Endroid, an experimental project to build a self-aware robot (project plan).  

So far, the easy mechanical parts are done:


The Vex 2-wire motors can be controlled from an Arduino fairly easily with the VEX Motor Controller 29, which turns the Arduino's PWM (pulse-width-modulated) signals into the DC signals the motors need, so you can plug them directly into the PWM pins and write the programs using the Servo.h class. 

There are only two parts that were tricky to me, as a beginner at making physical toys. First is the power source: you can't drive the motors from a 3V or 5V supply, so you power them separately with a 7.5V or 9V battery; the reason for this will be explained again later as well.

The second part is the values for moving forward/stopping/moving backward when using Servo.h on the Arduino; the following values were tested:

const int MF = 20;  // angle that moves motor forward
const int MB = 138; // angle that moves motor backward
const int MS = 91; // angle that stops the motor

(Well, switch to #define in your code, of course.)
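
A minimal Arduino sketch using those values (the pin number is an arbitrary choice for illustration):

#include <Servo.h>

const int MF = 20;   // forward
const int MB = 138;  // backward
const int MS = 91;   // stop

Servo leftMotor;

void setup() {
    leftMotor.attach(9);  // PWM pin wired to the Motor Controller 29
}

void loop() {
    leftMotor.write(MF);  // drive forward for a second...
    delay(1000);
    leftMotor.write(MS);  // ...then stop for a second
    delay(1000);
}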

Then, for the Arduino and Android connection, you will first have to download and follow the instructions for the DemoKit app for ADK 2.0. 

A stripped-down version that lets you control the robot's movement in different directions from an Android device (3.0+) can be found below. I have omitted the layout XMLs and manifest since they are fairly straightforward. Credit also goes to various online sources and Stack Overflow. 

Looking forward

2012 is almost over. Many key life events happened to me: graduating from college, finding my first full-time job (Squarespace), getting funding for my startup (NYU to SleepBot), and, at the end, the death of a close family member. It was a very busy year, and I certainly wish it had ended better.

Looking forward to 2013, I will try to make the best of it together with Jane. Now that I am done with school, I can focus more on SleepBot and continue preparing to start on my robotics stack.

If the world does not end in a week, next year will be the best.

Can't live without git any more

Thanks, Linus, for writing the stupid content tracker [1], for the millionth time. I just pinned down, within an hour, the exact line committed to the repo two weeks ago; that might have taken hundreds of hours of work if the project were not version controlled with git (even with SVN it would have taken 2-5x as long). :)

[1] Try "man git" on a POSIX system.

Why I choose Cooper Union

Four years ago, I chose Cooper Union over all the other schools...including UC Berkeley and various other top schools. Today, I'd say it was probably one of the best decisions I have ever made. To honor that decision, I agreed to write a small piece in the "US News Best Colleges 2013" edition. I hope it will help Cooper continue to attract the best applicants. :) 

Not to my surprise, Cooper is once again #1 in the North, and the computer/electrical engineering program is ranked #2 in the country, with one of the lowest acceptance rates anywhere (on par with Harvard, MIT, Stanford...).

Congrats! 
