I have been focused primarily on the Android platform for about 5 years, so the Intents framework is pretty much a part of my DNA at this point. When Twitter went aflame with “iOS has Intents! #WWDC” tweets during last week’s keynote, I sat up and took notice. Could it be? What follows are the things I have observed about what the two frameworks now have in common, and where they diverge.
The BeagleBone Black supports fastboot using its on-board eMMC. However, support is not built into the software shipped on the boards, and there are a few build tricks you need to manage in order to get it all up and running.
The Drawable framework in Android is a neat and really flexible way to create portions of your UI. Many times have I been able to simplify the view hierarchy or required resources just by getting creative with what a Drawable can do. Recently, I had a need to place text into a Drawable so it could be inserted in places where the framework only allows Drawables to go. So I created TextDrawable and thought I’d share it.
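As a rough sketch of the idea (the class shape below is my own illustration, not necessarily the published TextDrawable API), such a Drawable simply measures a string with a Paint and paints it inside its bounds:

```java
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.ColorFilter;
import android.graphics.Paint;
import android.graphics.PixelFormat;
import android.graphics.Rect;
import android.graphics.drawable.Drawable;

public class TextDrawable extends Drawable {
    private final String mText;
    private final Paint mPaint;

    public TextDrawable(String text) {
        mText = text;
        mPaint = new Paint(Paint.ANTI_ALIAS_FLAG);
        mPaint.setColor(Color.BLACK);
        mPaint.setTextSize(22f);
        mPaint.setTextAlign(Paint.Align.CENTER);
    }

    @Override
    public void draw(Canvas canvas) {
        // Draw the text centered inside the Drawable's current bounds
        Rect bounds = getBounds();
        canvas.drawText(mText, bounds.centerX(),
                bounds.centerY() - ((mPaint.descent() + mPaint.ascent()) / 2f),
                mPaint);
    }

    @Override
    public int getIntrinsicWidth() {
        return (int) mPaint.measureText(mText);
    }

    @Override
    public int getIntrinsicHeight() {
        return (int) (mPaint.descent() - mPaint.ascent());
    }

    @Override
    public void setAlpha(int alpha) {
        mPaint.setAlpha(alpha);
    }

    @Override
    public void setColorFilter(ColorFilter cf) {
        mPaint.setColorFilter(cf);
    }

    @Override
    public int getOpacity() {
        return PixelFormat.TRANSLUCENT;
    }
}
```

An instance can then be handed to APIs that accept only Drawables, such as `TextView.setCompoundDrawablesWithIntrinsicBounds()`.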
Since its inception, one of the key elements of the Android platform has been the tools provided in its resource framework to select appropriate assets tailored to the user’s device type. Over time, this system has been extended here and there with qualifiers for screen size, resolution density, the presence of hardware items like keyboards, and so on. Undoubtedly, its most common use is to adapt to different screen configurations (both size and resolution density). As the Android device landscape has grown, developers like myself have struggled to keep up with the additions Google is making to the SDK to allow applications to adapt properly.
A common request from designers that I work with is to draw text in an application that contains an inner shadow. Here are some tricks to do that using the Android SDK.
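One way to fake the effect, sketched here under my own assumptions (this is not necessarily the article's exact technique, and all names below are illustrative), is to draw the text normally and then composite a blurred, offset dark copy on top using `PorterDuff.Mode.SRC_ATOP`, which clips the shadow to the glyphs already drawn:

```java
import android.content.Context;
import android.graphics.BlurMaskFilter;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.graphics.PorterDuff;
import android.graphics.PorterDuffXfermode;
import android.util.AttributeSet;
import android.view.View;

public class InnerShadowTextView extends View {
    private final Paint mTextPaint = new Paint(Paint.ANTI_ALIAS_FLAG);
    private final Paint mShadowPaint = new Paint(Paint.ANTI_ALIAS_FLAG);
    private String mText = "Shadowed";

    public InnerShadowTextView(Context context, AttributeSet attrs) {
        super(context, attrs);
        // BlurMaskFilter is not supported by the hardware renderer,
        // so force software drawing for this view
        setLayerType(LAYER_TYPE_SOFTWARE, null);
        mTextPaint.setTextSize(64f);
        mTextPaint.setColor(Color.WHITE);
        mShadowPaint.set(mTextPaint);
        mShadowPaint.setColor(Color.BLACK);
        mShadowPaint.setMaskFilter(new BlurMaskFilter(4f, BlurMaskFilter.Blur.NORMAL));
        // SRC_ATOP keeps only the shadow pixels that overlap the text below
        mShadowPaint.setXfermode(new PorterDuffXfermode(PorterDuff.Mode.SRC_ATOP));
    }

    @Override
    protected void onMeasure(int widthMeasureSpec, int heightMeasureSpec) {
        setMeasuredDimension(
                resolveSize((int) mTextPaint.measureText(mText) + 8, widthMeasureSpec),
                resolveSize((int) (mTextPaint.descent() - mTextPaint.ascent()) + 8,
                        heightMeasureSpec));
    }

    @Override
    protected void onDraw(Canvas canvas) {
        float x = 4f;
        float y = -mTextPaint.ascent() + 4f;
        // Composite in an offscreen layer so SRC_ATOP only sees this text
        int save = canvas.saveLayer(0, 0, getWidth(), getHeight(), null,
                Canvas.ALL_SAVE_FLAG);
        canvas.drawText(mText, x, y, mTextPaint);
        canvas.drawText(mText, x + 2f, y + 2f, mShadowPaint);
        canvas.restoreToCount(save);
    }
}
```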
Something that has bothered many Android developers (including myself) since the dawn of the soft keyboard is the lack of a simple way to know when a user is done editing the text in a text field. I’ve created a simple widget that attempts to track this state.
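Such a widget can be sketched along these lines (the class and listener names here are my own illustration, not the actual widget's API): treat the IME's Done action, or loss of focus, as the editing-complete signal.

```java
import android.content.Context;
import android.graphics.Rect;
import android.util.AttributeSet;
import android.view.inputmethod.EditorInfo;
import android.widget.EditText;

public class DoneEditText extends EditText {
    public interface OnEditDoneListener {
        void onEditDone(DoneEditText view);
    }

    private OnEditDoneListener mListener;

    public DoneEditText(Context context, AttributeSet attrs) {
        super(context, attrs);
    }

    public void setOnEditDoneListener(OnEditDoneListener listener) {
        mListener = listener;
    }

    @Override
    public void onEditorAction(int actionCode) {
        super.onEditorAction(actionCode);
        // Fired when the user taps the IME's action key
        if (actionCode == EditorInfo.IME_ACTION_DONE && mListener != null) {
            mListener.onEditDone(this);
        }
    }

    @Override
    protected void onFocusChanged(boolean focused, int direction,
            Rect previouslyFocusedRect) {
        super.onFocusChanged(focused, direction, previouslyFocusedRect);
        // The user moved on to another field without tapping Done
        if (!focused && mListener != null) {
            mListener.onEditDone(this);
        }
    }
}
```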
Android 2.0 introduced ContactsContract to replace the previous Contacts API, bringing a drastic increase in both functional capability and API complexity for applications wishing to interact with the Contacts Provider. This level of access allows applications to make very detailed and specific additions and updates to the Contacts/People database on the device; however, it can make some of the simpler common operations seem to require an unnecessary amount of code.
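For example, one of those common simpler operations, listing contact display names, still looks roughly like this (the column names come from the platform API; the surrounding method is illustrative and assumed to live inside an Activity so `getContentResolver()` is available):

```java
import android.database.Cursor;
import android.provider.ContactsContract;
import android.util.Log;

private void logContactNames() {
    // Query the aggregated contacts table for ids and display names
    Cursor cursor = getContentResolver().query(
            ContactsContract.Contacts.CONTENT_URI,
            new String[] {
                    ContactsContract.Contacts._ID,
                    ContactsContract.Contacts.DISPLAY_NAME },
            null, null,
            ContactsContract.Contacts.DISPLAY_NAME + " ASC");
    if (cursor == null) return;
    try {
        int nameColumn = cursor.getColumnIndexOrThrow(
                ContactsContract.Contacts.DISPLAY_NAME);
        while (cursor.moveToNext()) {
            Log.d("Contacts", cursor.getString(nameColumn));
        }
    } finally {
        cursor.close();
    }
}
```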
When I started developing applications for Android in early 2009, I never expected that I would ever own a rooted device. I felt that, as a developer, I needed to be testing my applications in an environment that best emulated my users, and rooting would compromise that environment. However, recently I have been forced to take a second look at that opinion in light of some unfortunate barriers Android developers face today.
Kirill Grouchnikov recently published a post on his blog detailing how the Android Market app uses a synchronized scrolling technique to take a certain view from inside the ScrollView content and let it float to the top of the screen when it would otherwise have been scrolled off-screen. This post is meant to elaborate on his details with a splash of sample code.
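As a taste of the approach, the pinning can be sketched as a ScrollView subclass that translates a designated "sticky" child back into view as it scrolls past the top (a rough illustration of the concept, not Kirill's or the Market app's actual code):

```java
import android.content.Context;
import android.util.AttributeSet;
import android.view.View;
import android.widget.ScrollView;

public class StickyScrollView extends ScrollView {
    private View mStickyView;

    public StickyScrollView(Context context, AttributeSet attrs) {
        super(context, attrs);
    }

    // The sticky view is assumed to be a direct child of the
    // ScrollView's single content layout.
    public void setStickyView(View view) {
        mStickyView = view;
    }

    @Override
    protected void onScrollChanged(int l, int t, int oldl, int oldt) {
        super.onScrollChanged(l, t, oldl, oldt);
        if (mStickyView != null) {
            // Once the view's natural top scrolls past the visible top,
            // push it back down by the overshoot so it appears pinned
            float offset = Math.max(0, t - mStickyView.getTop());
            mStickyView.setTranslationY(offset);
        }
    }
}
```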
Developers often have a need to create rows in a ListView that have multiple interactive locations the user can touch, instead of just one single clickable row. This is a pattern that even Google has employed in apps like DeskClock: its Alarm tab displays each list item with a small toggle button inside, used to enable or disable each alarm. In addition, the remainder of the list item is also touchable and takes the user to a screen to edit the alarm parameters.
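A sketch of how the adapter side of this pattern can look (the `Alarm` model, layout, and ids below are all illustrative): the crucial detail is that the inner button must not be focusable, or it will take the touch focus and the row's own click events will stop firing.

```java
import android.content.Context;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;
import android.widget.ArrayAdapter;
import android.widget.CompoundButton;
import android.widget.TextView;

import java.util.List;

public class AlarmAdapter extends ArrayAdapter<Alarm> {
    private final LayoutInflater mInflater;

    public AlarmAdapter(Context context, List<Alarm> alarms) {
        super(context, 0, alarms);
        mInflater = LayoutInflater.from(context);
    }

    @Override
    public View getView(int position, View convertView, ViewGroup parent) {
        if (convertView == null) {
            convertView = mInflater.inflate(R.layout.alarm_row, parent, false);
        }
        final Alarm alarm = getItem(position);

        CompoundButton toggle =
                (CompoundButton) convertView.findViewById(R.id.alarm_toggle);
        // Without this (or android:focusable="false" in the layout),
        // the button steals focus and the row stops receiving clicks
        toggle.setFocusable(false);
        toggle.setOnCheckedChangeListener(null); // clear any recycled listener
        toggle.setChecked(alarm.enabled);
        toggle.setOnCheckedChangeListener(new CompoundButton.OnCheckedChangeListener() {
            @Override
            public void onCheckedChanged(CompoundButton button, boolean checked) {
                alarm.enabled = checked;
            }
        });

        TextView label = (TextView) convertView.findViewById(R.id.alarm_label);
        label.setText(alarm.label);
        return convertView;
    }
}
```

The tap on the row itself is then handled normally through `ListView.setOnItemClickListener()`.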