Monday, June 6, 2011

THE AGILE BLINDSIDE

Esther | Tuesday, May 31st, 2011
(this article originally appeared on gantthead.com)
Agile project management depends on transparency and feedback. Visibility into the product and process is built in with iteration reviews and retrospectives. Task walls and Kanban boards make progress (or lack of it) and bottlenecks obvious. Stand-up meetings seek to raise impediments to management attention. But are managers ready to hear about these problems?
If organizations want to realize the benefits of agile methods, managers need to act on the problems that bubble up from teams, deal with unexpected events on projects and proactively find and fix problems that derail projects. Unfortunately, many managers behave in ways that communicate they aren’t interested in solving problems–and ensure they won’t learn of problems until it’s too late.

“Don’t bring me problems, bring me solutions.”

I suspect that managers who repeat this sentence believe it will encourage people and teams to solve problems on their own. But people don’t approach their managers with problems they know how to fix and can solve (or believe they can solve). The problems they raise are the ones they don’t know how to solve, don’t have the organizational influence to solve, or need some help to solve.
What really happens when managers talk this way? Team members struggle in isolation or ignore problems, hoping they will go away. Managers who tell people not to bring them problems ensure that they won’t hear about small problems that are part of a larger pattern.

“Failure is not an option!”

Managers who rely on this exhortation ensure they won’t hear about risks and issues. The phrase sends the message that success is a matter of character and will rather than the result of planning, observation, re-planning and course correction when something unexpected occurs.
Will and character are assets in any endeavor; however, they are not sufficient for success. Success requires removing impediments and proactively finding and ameliorating problem situations. Failure may not be an option that managers like, but it is always a possibility; ignoring that fact forces problems underground and makes failure more likely.
“The thought that disaster is impossible often leads to an unthinkable disaster.” – Gerald M. Weinberg

“Get with the program or get off the bus!”

When managers give the impression that their minds are already made up, subordinates are less likely to bring up weaknesses, problems or alternatives. People fear that their concerns won’t be heard. Worse, they fear (often with reason) being labeled as naysayers or whiners. Discourage people from shining the light on problems and they’ll stop.
But managers don’t need to be obvious in their discouragement; more subtle actions can also plug the pipe.

Interrupting

Interrupting a person who brings unwelcome news makes it harder for that person, who is already facing a difficult conversation. People interrupt for many reasons–excitement, the desire for more details, etc. But to the person being interrupted, a stream of interruptions can feel like an interrogation. Interrupting implies impatience–and that anything the interrupter has to say is more important than what the other person was about to say.

Ignoring Intuition

A couple of years ago, a friend felt uneasy about an action his manager was taking. He couldn’t quite put his finger on why he felt concerned, but his feeling was strong enough that he went to his manager–who dismissed his intuition, telling him, “Come back when you have some facts and we can have a logical argument.” But the situation outpaced data gathering and blew up.
Asking for excessive proof and demanding data ensures that a whole class of complex and systemic problems won’t come to attention early.

Non-verbal cues

I coached a manager who furrowed her brow and tapped her pencil when people told her about problems. She was thinking hard. They thought she was irritated with them for bringing bad news.
When there’s a problem on a project, the earlier you know about it, the more options you have to mitigate the impact, make a course correction or re-set expectations. But you won’t hear about problems if you plug the information pipeline.
“The problem isn’t the problem. Coping with the problem is the problem.” – Virginia Satir
As much as we might wish there were no problems on projects, that’s not the way the world works. Problems are a normal part of life. Managers need to know about problems so they can see patterns, find options and steer projects.
Here are three things you can do to make sure your information pipeline flows:
Tell people you want to hear about problems. Sounds simple–and it is. Assure people that you understand that nothing goes exactly as planned and you don’t expect perfection. You may not want every problem dropped at your doorstep to solve–but if you act as if having problems is a problem, you won’t learn about impediments and issues when they are small.
Learn how to listen. At a recent talk, a participant asserted that people from [fill a non-western country here] don’t know how to say “no.” This is not true. What is true is that many Americans don’t hear it when people from different cultures say “no.” The same is true for hearing about problems. If you want to build an early warning information system, you need to learn how to listen. That means refraining from interruptions. It also means listening for less obvious cues and for what isn’t being said. When there’s a long hesitation preceding a positive statement, there’s more to learn. If you don’t hear any mention of risks or issues, delve deeper.
Teach people how to speak up. I don’t want to clog the information pipeline by implying that I only want to hear about problems that have ready solutions; the most important and dangerous problems don’t have an obvious fix. Here’s a framework that has worked for me. It provides useful information and an agreed-upon format that reduces the psychological barriers to raising issues:
“Here’s my hunch…” This makes it explicit that I don’t require excessive proof.
“Here’s why you need to know about it…” This signals that I recognize that I don’t know everything.
“Here’s my data…” If there is data, it’s good to know. And I’ve heard about intuition being borne out enough that “I have a bad feeling about this” is good enough for me.
“Here’s what I’ve considered or tried…” I do want people to think about the issue and I want to hear about their thinking. Problem solving is improved by multiple points of view.
Standard agile practices such as visible charts, frequent demonstration of working product, and retrospectives are all ways to make both progress and problems visible. But if people don’t feel safe bringing up issues, you won’t hear about them until it’s too late. If you take the actions outlined here, it will be easier for people to bring problems to you. Problems are part of life–and projects. Pretending otherwise creates a culture in which they hide and fester.

Sensing the World from an Android[tm] Device

by
Dale Wilson, Principal Software Engineer
Object Computing, Inc. (OCI)

Introduction

Calling it a mobile phone is like the blind man finding the elephant's tail and thinking he's found a snake. Sure, it fits in your shirt pocket, and it will let you talk to your grandmother. But that's just the beginning of what this device knows how to do. It knows which way is up. It knows which way is north. It knows where it is on the planet. It knows how high it is above sea level. It knows how bright the light is around it. It knows if there is anything close to its screen. It knows how the birds can get revenge on the pigs who stole their eggs. It is truly amazing.
To know what's going on in the world around it, the mobile device has sensors. Programming is needed to set those sensors up properly and accept the data they produce. That's the topic for this article -- how to write programs that use those sensors. To be more specific, this article focuses on Android-based devices. Equivalent articles could be written about iPhone[tm], Windows 7 Mobile[tm], WebOS[tm], or BlackBerry[tm] devices, but that will have to wait for another time.

Development Environment

If you are writing a program for an Android device you are probably working in Java, and you almost certainly have downloaded and installed the free Android Software Development Kit. After that you need to understand the Android architecture. There are many ways to acquire that knowledge including taking OCI's Android Platform Development Course. This article will assume you have a basic understanding of Android programming.
At the time of this writing, there are two free IDEs in common use for developing Android applications, Eclipse and JetBrains' IDEA Community Edition. Both of them require freely available Android plug-ins to support Android development. The example program for this article was developed using IDEA.

Getting Started -- Finding a SensorManager

In Android any access to a sensor starts by finding the system-wide SensorManager. This code assumes you are running in the context of an Android Activity. You need to import the Android support for sensors using the following statements:
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;
Now the following code will find the SensorManager:
SensorManager sensorManager =
        (SensorManager)getSystemService(SENSOR_SERVICE);

Finding a Sensor

Every model of Android device will have a different set of sensors. Before using any of the sensors described here, you need to check to be sure the sensor is available. You can use the SensorManager to discover what sensors are available on this device or to find a particular sensor. There are two methods available to help:
  • SensorManager.getSensorList(type_specifier) will return a list of all the sensors of a particular type.
  • SensorManager.getDefaultSensor(type_specifier) will return a single sensor - the default sensor for a particular type.
    Be sure to check for a null return! It means no sensor of the requested type is available.
The type_specifier is one of the constants defined in the Sensor class. From now on, I'll refer to this as Sensor.TYPE, but this is not a class or an enumerated value. It is just a set of integer constants with similar names.
Possible Sensor.TYPEs include:
  • TYPE_ACCELEROMETER
  • TYPE_GRAVITY
  • TYPE_GYROSCOPE
  • TYPE_LIGHT
  • TYPE_LINEAR_ACCELERATION
  • TYPE_MAGNETIC_FIELD
  • TYPE_PRESSURE
  • TYPE_PROXIMITY
  • TYPE_ROTATION_VECTOR
  • TYPE_TEMPERATURE
When you are using the getSensorList() method, these values may be combined with an OR operator to include more than one type of sensor. There is also a special Sensor.TYPE, TYPE_ALL, that when used with getSensorList() will return a list of all the sensors on the mobile device. We will use that in our example program. For normal use, however, you probably want to call getDefaultSensor() with a specific type.
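As a quick illustration (a sketch of my own, not part of the article's downloadable listing), the following loop enumerates every sensor on the device and logs its name and vendor. It assumes the sensorManager found above plus imports for java.util.List and android.util.Log:
List<Sensor> allSensors = sensorManager.getSensorList(Sensor.TYPE_ALL);
for (Sensor sensor : allSensors) {
    // Log the name and vendor of each sensor found on this device.
    Log.i("Sensors", sensor.getName() + " (" + sensor.getVendor() + ")");
}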
As an example, let's suppose you are writing a compass application. You want to use the magnetic field sensor in your phone to determine which way is north. To gain access to that sensor, use the following code:
Sensor magMeter =
        sensorManager.getDefaultSensor(Sensor.TYPE_MAGNETIC_FIELD);
There are some interesting points here. First of all there is no special Java class for the different types of sensors. The class Sensor provides a generic interface that is assumed to be flexible enough to support the requirements of any of the sensor types on the device.
The second point of interest is not obvious. There is a sensor type which is not included in the above list, TYPE_ORIENTATION. It is omitted from this list even though it's defined in the Android source code and documentation because Android has deprecated this Sensor.TYPE. Instead the SensorManager provides specialized support for orientation sensors. This orientation support is described later in this article.
Other features of the device such as the camera or global positioning sensor are supported through different APIs. They will have to wait for another article.

Using a Sensor

So now that we have a Sensor, what can we do with it? The surprising answer is, "Not much." The Sensor object itself serves two purposes.
  • It provides information about the sensor: Who makes it? How much power does it consume? How accurate is it?
  • It serves as a handle or identifier for the sensor if you want to talk about it to other parts of the system.
Notably missing from the sensor's interface is any way to read values from the sensor! As we'll see in a minute, that job is handled by the SensorManager.
Also missing is information about how many values the sensor provides. Does it give a single value or a 3-D vector? What units of measure does it use? And so on. For that information you need to go to the Android documentation. The SensorEvent page in particular tells you for each Sensor.TYPE how many and what types of values you can expect.
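The information the Sensor object does provide can be read straight off of it. This is a sketch of my own rather than code from the article; the accessors used are part of the standard android.hardware.Sensor API:
// Describe the magnetic field sensor found earlier.
String description = magMeter.getName()
        + " by " + magMeter.getVendor()
        + ", power " + magMeter.getPower() + " mA"
        + ", resolution " + magMeter.getResolution()
        + ", maximum range " + magMeter.getMaximumRange();
android.util.Log.i("Sensors", description);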

Reading Data from a Sensor

Assuming we don't really care who makes the sensor or how much power it consumes, but that we are interested in the values provided by the sensor, what's the next step? To read values from the sensor, we have to create an object that implements the SensorEventListener interface. [Aside: there is also an earlier, deprecated, interface named SensorListener - ignore it!]
We can then register our SensorEventListener and the Sensor object with the SensorManager. This does two things. It enables the sensor if it was turned off, and it provides a callback that the SensorManager can use when a new value is available from the sensor.
Here is the code that creates a SensorEventListener and provides implementations for the two abstract methods in the interface. These implementations just forward the call to corresponding methods in the containing object.
SensorEventListener magneticEventListener =
     new SensorEventListener() {
        public void onSensorChanged(SensorEvent sensorEvent) {
            // call a method in containing class
            magneticFieldChanged(sensorEvent);
        }

        public void onAccuracyChanged(
            Sensor sensor, int accuracy) {
            // call a method in containing class
            magneticFieldAccuracyChanged(sensor, accuracy);
        }
     };
Having created a SensorEventListener, the program should register it with the SensorManager using code like this:
sensorManager.registerListener(magneticEventListener, magMeter, SensorManager.SENSOR_DELAY_NORMAL);
This SensorEventListener will now receive events from the magMeter Sensor acquired earlier.
The third argument to SensorManager.registerListener() is a suggestion about how often the application would like to receive new values. Possible delay values from fastest to slowest are:
  • SENSOR_DELAY_FASTEST,
  • SENSOR_DELAY_GAME,
  • SENSOR_DELAY_UI, and
  • SENSOR_DELAY_NORMAL.
Faster speeds cause more overhead, but make the device more responsive to changes in the values detected by the sensor.

Housekeeping and Good Citizenship

Just as important as registering a SensorEventListener and enabling the sensor is disabling the sensor and unregistering the listener when it is no longer needed. Registering and unregistering should be handled as "bookends": if you add the registration code above to your Activity's onResume() method (a good place for it), be sure to add this code to the onPause() method:
sensorManager.unregisterListener(magneticEventListener, magMeter);
That will ensure that the sensor is turned off -- prolonging battery life. Even though most sensors can be shared, unregistering the listener when it is no longer needed will also make sure the sensor is available to other applications that may run on the device.
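Put together, the two bookends might look like the following sketch, assuming sensorManager, magMeter, and magneticEventListener have been saved in fields of the Activity:
@Override
protected void onResume() {
    super.onResume();
    // Enable the sensor and start receiving events.
    sensorManager.registerListener(magneticEventListener, magMeter,
            SensorManager.SENSOR_DELAY_NORMAL);
}

@Override
protected void onPause() {
    super.onPause();
    // Turn the sensor back off while the Activity is not visible.
    sensorManager.unregisterListener(magneticEventListener, magMeter);
}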

Handling Sensor Accuracy

Notice that there are two callback methods defined in SensorEventListener: onSensorChanged() and onAccuracyChanged(). We will discuss onAccuracyChanged() first.
As the name implies, this callback occurs when something has increased or decreased the expected accuracy of the values produced by this sensor. The integer argument will be one of the following values - in order from least to most accurate:
  • SENSOR_STATUS_UNRELIABLE means that the values cannot be trusted. Something is preventing the sensor from acquiring accurate readings, so any reported values are just wrong.
  • SENSOR_STATUS_ACCURACY_LOW means that the values are correct, but not very accurate.
  • SENSOR_STATUS_ACCURACY_MEDIUM means the values are fairly accurate, but they are not the best that this sensor is capable of under ideal conditions.
  • SENSOR_STATUS_ACCURACY_HIGH means the sensor is producing the best values it can produce under excellent operating conditions.
Unfortunately, there seem to be some devices that always report SENSOR_STATUS_UNRELIABLE and others that always report SENSOR_STATUS_ACCURACY_HIGH. Don't place too much confidence in the accuracy status.
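With that caveat in mind, here is one way the magneticFieldAccuracyChanged() method that the listener forwards to could react. This is a sketch of my own; the log messages are illustrative:
private void magneticFieldAccuracyChanged(Sensor sensor, int accuracy) {
    switch (accuracy) {
        case SensorManager.SENSOR_STATUS_UNRELIABLE:
            android.util.Log.w("Sensors", sensor.getName() + ": readings cannot be trusted");
            break;
        case SensorManager.SENSOR_STATUS_ACCURACY_LOW:
        case SensorManager.SENSOR_STATUS_ACCURACY_MEDIUM:
            android.util.Log.i("Sensors", sensor.getName() + ": reduced accuracy");
            break;
        case SensorManager.SENSOR_STATUS_ACCURACY_HIGH:
        default:
            // Best accuracy this sensor can deliver; nothing special to do.
            break;
    }
}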

Using the Sensor Data

Finally, we are ready to discuss the interesting callback -- onSensorChanged(). The argument passed when this method is called is a SensorEvent structure. This structure contains real data from the sensor. Let's see what we've got.
I described SensorEvent as a structure. Technically it's a Java class, but this class does not have any useful methods - only public data members (fields).
The first field is one we've already seen: int accuracy will contain one of the same values as the argument to the onAccuracyChanged() method. Thus for each sample of data from the sensor you know how accurate you can expect the data to be. For practical purposes you might be able to ignore the onAccuracyChanged() notice altogether and just use this value from the SensorEvent, although you still must implement the abstract onAccuracyChanged() method.
The next field is Sensor sensor. This is the same Sensor that we used to register this callback. It is included in case we have common code handling the events from more than one Sensor.
The third field is long timestamp. It tells us when this event occurred. A timestamp in Android has a resolution in nanoseconds. It is based on the most precise timer on the device, but it is not tied to any particular real world clock. This timestamp can be used to calculate the interval between events, but not to determine the time of day (or month or year) when the event occurred.
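For example, here is a sketch of using the timestamp to measure the interval between consecutive events, assuming lastTimestamp is a long field of the containing class initialized to zero:
long deltaNanos = (lastTimestamp == 0) ? 0 : sensorEvent.timestamp - lastTimestamp;
lastTimestamp = sensorEvent.timestamp;
// Convert from nanoseconds to seconds for display or rate calculations.
float deltaSeconds = deltaNanos / 1000000000.0f;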
The last field in the SensorEvent is float[] values. Yes, these are the values we are looking for. Most sensors will produce either one or three values in this array. The array in the SensorEvent is of a fixed size. It usually contains three floats even if the sensor produces fewer numbers. Be careful. Sometimes this array will have a size different from three.
The best approach is to use the Sensor.TYPE available via sensor.getType() to determine how many values are valid. The Sensor.TYPE also determines what units of measurement apply to this sensor. Fortunately Android has normalized the incoming sensor values so all sensors of the same type produce the same number of values using the same units.
Of course if you know what type of sensor you are working with you may not even need to check sensor.getType(). You can just write your code to handle the values you know you will receive.
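As an illustration, the magneticFieldChanged() method that the listener forwards to might consume the values like this. The field-strength calculation is my own sketch, not the article's; magnetic field values are reported in microtesla:
private void magneticFieldChanged(SensorEvent sensorEvent) {
    if (sensorEvent.sensor.getType() == Sensor.TYPE_MAGNETIC_FIELD) {
        // Magnetic field sensors report a 3-D vector, one float per axis.
        float x = sensorEvent.values[0];
        float y = sensorEvent.values[1];
        float z = sensorEvent.values[2];
        double fieldStrength = Math.sqrt(x * x + y * y + z * z);
        android.util.Log.i("Sensors", "Field strength: " + fieldStrength + " uT");
    }
}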

Coordinates

Many of the sensors provide a three-dimensional vector for the measured value. They provide values for the x-axis, the y-axis, and the z-axis as values[0], values[1], and values[2] respectively. Now all you need to know is the relationship of these axes to the actual device.
To simplify matters, all devices use the same axes albeit with different units. The axes are firmly attached to the device. If you move the device, the coordinate axes move right along with it.
Every device has a natural orientation. For most phones the natural orientation is portrait (taller than it is wide). For most tablets, on the other hand, the natural orientation is landscape (wider than it is tall). The axes for the device are based on this natural orientation.
The origin of the axes -- point (0,0,0) -- is in the center of the device's screen.
If you hold the device vertically in its natural orientation, the x axis runs left to right across the center of the screen. Positive values are to your right and negative values are to your left.
The y axis runs up and down in natural orientation. Positive values are up and negative values are down.
When you stare straight at the screen you are looking along the z axis. Negative values are behind the screen and positive values are in front.

Orientation

As mentioned previously, using the results from the orientation sensor directly through the SensorManager interface has been deprecated in Android. Instead, there is a different way to determine the orientation of the device.
What you want to know is not really how the device is being held, but rather what screen orientation Android is using. Interpreting the values received from the orientation sensor is only a small part of the puzzle. There are techniques an application can use to lock the screen into a particular orientation or to change orientations under program control regardless of the way the device is actually being held.
For the program presented in this article, we want to display the sensor data visually on the screen. In order to do so, the coordinates returned by the sensors have to be mapped into the coordinates used to draw on the screen.
The 2-D drawing coordinates are relative to the upper left corner. This means the Y values on the screen increase from top to bottom, but the Y values from the sensor increase from bottom to top. To reconcile sensor coordinates to drawing coordinates the Y values must be negated.
After this correction, the coordinates need to be rotated around the Z axis. Because the only orientations involve some number of ninety degree rotations, this can always be done by various combinations of swapping and/or negating X and Y coordinates.
Finally, the coordinates have to be scaled properly to match the size of the screen. There are a number of techniques for doing that including using the coordinate transformation matrix built into the Android View (which could handle the orientation-mapping, too), but the details are beyond the scope of this article. See the source code for one way to scale the coordinates.
But before any of this rotation can happen, the software needs to know the screen orientation. Here's the code to find that out:
Display display = ((WindowManager) getContext().getSystemService(Context.WINDOW_SERVICE)).getDefaultDisplay();
    int orientation = display.getOrientation();
At this point the variable orientation contains one of the following values:
  • Surface.ROTATION_0
    This is the natural orientation of the device. Notice that for a phone this will be portrait mode, but for a tablet it will be landscape mode.
  • Surface.ROTATION_90
    This is the normal landscape mode for a phone, or portrait mode for a tablet.
  • Surface.ROTATION_180
    The device is upside down.
  • Surface.ROTATION_270
    The device has been turned "the unexpected direction."
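Given that rotation value, the swap-and-negate mapping described above can be written as the following sketch, where x and y are the sensor's values[0] and values[1] and scaling is omitted. The exact signs depend on the rotation conventions; see the downloadable source for the approach actually used by the example program:
float screenX;
float screenY;
switch (orientation) {
    case Surface.ROTATION_0:        // natural orientation
        screenX = x;
        screenY = -y;               // drawing Y grows downward
        break;
    case Surface.ROTATION_90:
        screenX = -y;
        screenY = -x;
        break;
    case Surface.ROTATION_180:      // upside down
        screenX = -x;
        screenY = y;
        break;
    case Surface.ROTATION_270:
    default:
        screenX = y;
        screenY = x;
        break;
}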

A Working Application

So let's put this all together in a working application. The source code associated with this article includes a complete Android project consisting of three Activities.
  • The first Activity displays all sensors reported by the SensorManager.
    Touching the name of any sensor gets you to the second Activity.
  • The second Activity displays the details for a particular sensor. It registers to receive updates from the sensor, and displays the resulting values.
    Hitting the menu button while this Activity is showing gives you the option of displaying the final activity.
  • The final activity displays the readings from the sensor you have selected as a vector on the screen. Of course this only makes sense for a sensor that returns three coordinates, but the program as written doesn't check for that. Expect strange results if you use this option for a single-valued sensor.
Here's what it looks like running on a Samsung Epic[tm]:
Android Screen Shot: List of devices
Screen 1: The first Activity shows a list of sensors returned from SensorManager.getSensorList(Sensor.TYPE_ALL).
Even though using Orientation Sensor as a Sensor object is deprecated, it still shows up on this list.
Android Screen Shot: Accelerometer detail
Screen 2: Selecting SMB380 from the opening screen gets this information about the accelerometer.
It is interesting to note that even when it is sitting motionless on a table, the accelerometer reports an acceleration of over 10.3 m/sec². This is acceleration due to gravity. But wait! Back in physics class we learned that acceleration due to gravity was 9.8 m/sec². The moral here is that real world sensors (not just the ones built into mobile phones) usually need to be calibrated.
Also worth noting is that this sensor always reports an accuracy of 0 (meaning unreliable). This accuracy status itself is unreliable! Except for the calibration issue, the accelerometer on this device is quite accurate.
Android Screen Shot: Proximity sensor detail
Screen 3: The proximity sensor only returns one value.
The only values returned by the proximity sensor on the Epic are 0.0 and 1.0. Software can tell if there's something close to the screen or not, but it can't really tell how far away it is. The moral of the story is not every sensor fits comfortably in the generic Sensor model supported by Android.
Android Screen Shot: Accelerometer as a vector (Portrait)
Screen 4: Accelerometer values as a vector (portrait mode).
Android Screen Shot: Accelerometer as a vector (Landscape)
Screen 5: Accelerometer values as a vector (landscape mode).
These two screen shots show the readings from the accelerometer displayed as a vector. Because the sensor coordinates are tied to the device, but the screen coordinates change when the display switches to landscape mode, the software has to check the orientation to map the vector onto the screen.

Conclusions

Android makes it easy to access the sensors on the mobile device by normalizing the sensor's behavior and data into a common model. There are a few issues, however, that cannot be hidden from the application.
Not all sensors support all of the properties exposed by the Sensor object - for example the Bosch accelerometer shown on the screen shot above does not report how much current it uses.
Also not all devices return the types of data expected by Android. The units for a proximity sensor are expected to be centimeters, but the one shown above provides a yes/no answer to the question, "Is there something close to the screen?"
In spite of these limits, making an application aware of the world around it via the sensors in the mobile device is a relatively easy task that can potentially produce very useful behaviors from the application.

Source Code

The source code for this example application can be downloaded from the OCI Software Engineering Tech Trends web site as a zip file or a tar.gz file. It is a complete Android project that can be built and run in the Android Emulator or installed directly into a device via the USB Debugging port. Because IDEA was used to develop this project, Eclipse users might have to do a little extra work to import this project, but if you are familiar with the Android development environment it should be straightforward.
The download files also include Sensor.apk, a pre-built copy of the Sensors application ready to be loaded into your Android phone. If you would like to regenerate this signed application, the password for the digital signature (included in the "assets" directory) is "sensors".
The source code is covered by a liberal BSD-style license. This makes it available for any use, commercial or otherwise, with proper attribution.
If you are interested in moving beyond this simple application to explore the possibilities of harnessing the power of Android for your organization's needs, please contact us to ask about the wide variety of support and training available from OCI.

The JNB has a new name! The new "Software Engineering Tech Trends" will continue to cover Java and related technologies but will also address the broader spectrum of relevant technologies available today.

OCI Educational Services

OCI is the leading provider of Object Oriented technology training in the Midwest. More than 3,000 students participated in our training program over the last 12 months. Targeted toward Software Engineers and the development community, our extensive program of over 50 hands-on workshops is delivered to corporations and individuals throughout the U.S. and internationally. OCI's Educational Services include Group Training events and Open Enrollment classes.
For further information regarding OCI's Educational Services programs, please visit our Educational Services section on this site or contact us at training@ociweb.com.

OCI Services

OCI offers real, cost-effective, open source support for the JBoss.org software and its suite of associated products. OCI has re-distribution friendly downloads at http://jboss.ociweb.com/ and provides support on a time and materials basis (not CPU count).
Fun People Doing Serious Software Engineering


Sunday, June 5, 2011

http://links.visibli.com/share/TzhYxL

Java Tip 136: Protect Web application control flow

A strategy built on Struts manages duplicate form submission

This article proposes a well-encapsulated solution to this problem: a strategy implemented as an abstract class that leverages the Struts framework.
Note: You can download this article's source code from Resources.

Client vs. server solutions

Different solutions can solve this multiple form submission situation. Some transactional sites simply warn the user to wait for a response after submitting and not to submit twice. More sophisticated solutions involve either client scripting or server programming.
In the client-only strategy, a flag is set on the first submission, and, from then on, the submit button is disabled based on this flag. While appropriate in some situations, this strategy is more or less browser dependent and not as dependable as server solutions.
For a server-based solution, the Synchronizer Token pattern (from Core J2EE Patterns) can be applied, which requires minimal contribution from the client side. The basic idea is to set a token in a session variable before returning a transactional page to the client. This page carries the token inside a hidden field. Upon submission, request processing first tests for the presence of a valid token in the request parameter by comparing it with the one registered in the session. If the token is valid, processing can continue normally, otherwise an alternate course of action is taken. After testing, the token resets to null to prevent subsequent submissions until a new token is saved in the session, which must be done at the appropriate time based on the desired application flow of control. In other words, the one-time privilege to submit data is given to one specific instance of a view. This Synchronizer Token pattern is used in the Apache Jakarta Project's Struts framework, the popular open source Model-View-Controller implementation.

A synchronized action

Based on the above, the solution appears complete. But an element is missing: how do we specify and implement the alternate course of action when an invalid token is detected? In fact, in the case where the submit button is reclicked, the second request will cause the loss of the first response containing the expected result. The thread that executes the first request still runs, but has no means of providing its response to the browser. Hence, the user may be left with the impression that the transaction did not complete, while in reality it may have completed successfully.
This tip's proposed strategy builds on the Struts framework to provide a complete solution that prevents duplicate submission and still ensures the display of a response that represents the original request's outcome. The proposed implementation involves the abstract class SynchroAction, which actions can extend to behave in the specified synchronized manner. This class overrides the Action.perform() method and provides an abstract performSynchro() method with the same arguments. The original perform method dispatches control according to the synchronization status, as shown in the listing below:
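(The listing shown here is a minimal reconstruction rather than the tip's original code: it assumes the Struts 1.0 Action API, with perform() and the built-in isTokenValid()/resetToken() helpers, and the "duplicateSubmission" forward name is illustrative.)
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.apache.struts.action.Action;
import org.apache.struts.action.ActionForm;
import org.apache.struts.action.ActionForward;
import org.apache.struts.action.ActionMapping;

public abstract class SynchroAction extends Action {

    public ActionForward perform(ActionMapping mapping, ActionForm form,
                                 HttpServletRequest request,
                                 HttpServletResponse response)
            throws IOException, ServletException {
        if (isTokenValid(request)) {
            // First submission: consume the token so a duplicate is detected,
            // then delegate to the subclass's real processing.
            resetToken(request);
            return performSynchro(mapping, form, request, response);
        }
        // Invalid or missing token: duplicate submission, so take the
        // alternate course of action instead of re-running the transaction.
        return mapping.findForward("duplicateSubmission");
    }

    // Subclasses implement their transactional work here.
    protected abstract ActionForward performSynchro(ActionMapping mapping,
            ActionForm form, HttpServletRequest request,
            HttpServletResponse response)
            throws IOException, ServletException;
}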

Android Futures: Creating Android Apps For Google TV


Tutorial Details
  • Technology: Android SDK
  • Difficulty: Beginner
  • Estimated Completion Time: 20 Minutes
Google IO 2011 took place in early May in San Francisco, California. In the midst of many announcements and tons of information, Android development for Google TV did get a little coverage. As of this writing, most developers cannot yet use Google TVs as a target device for development, but this is about to change. Developers looking to get a head start can follow a few easy tips and be ready when consumers can download applications for their TVs.
Existing Google TV devices are scheduled to be upgraded to Android 3.1, a Honeycomb variant, this summer (2011). The Android Market for Google TV will come with this upgrade, making this a hot new Android platform that developers want to prepare for. There are several key differences between Google TV devices and traditional, portable Android devices, such as phones and tablets. Most of the tips involve accounting for these differences.

Preparing Your Application: Screen Density and Resolution

Google TV devices run at two resolutions. The first is 720p (aka “HD”), or 1280×720 pixels. The second is 1080p (aka “Full HD”), or 1920×1080 pixels. These may sound like large numbers, but let’s see how they compare:
  • 1280×720 pixels is actually a lower resolution than existing Honeycomb (Android 3.0) tablets that run at 1280×800.
  • 1920×1080 is actually exactly 4x the number of pixels found on a phone with a qHD (“quarter” HD — makes sense, right?) display, which is 960×540 pixels.
Android tablets are typically used in landscape mode, but can be rotated to a portrait mode. Android phones are generally just the opposite. Televisions, however, are fixed devices and are only landscape-oriented. Unlike tablets or phones, though, televisions are not within arm’s reach. Taking into account their resolution and their distance from the user, Google has defined a standard DPI that each screen will be treated as. This is not the physical DPI of the TV screen, but rather an approximation of the perceived DPI, because the user sits at some distance from the screen.
For the 720p screens, the DPI will be considered high, or HDPI. For the 1080p screens, the DPI will be considered extra high, or XHDPI. Both screens are considered to be large, as far as resources go.
When creating graphics and resources, these properties — resolution, density, large size, and landscape orientation — can be combined to narrowly target Google TV devices. Graphics and layouts should be fairly large. The perceived density is fairly realistic. A layout that may look right or slightly over-sized on a tablet might be just right for the TV.
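For instance, one possible arrangement of resource directories combining these qualifiers (an illustration, not a layout prescribed by the tutorial) would be:
res/layout-large-land/    layouts for large, landscape-only screens such as TVs
res/drawable-hdpi/        bitmaps used by 720p Google TV devices (treated as hdpi)
res/drawable-xhdpi/       bitmaps used by 1080p Google TV devices (treated as xhdpi)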
One caveat is to not rely on the exact number of pixels. Televisions work a bit differently than regular screens and not all will actually expose every single pixel. Google TV will adjust the exact resolution so nothing gets cropped from the edges. This means your screen designs should be somewhat flexible to accommodate small adjustments when you’ve had a chance to test your applications on real Google TV devices.

Touching Televisions? Please Don’t

Beyond the physical differences between phones, tablets, and televisions, there is another important difference: televisions don’t have touchscreens. A typical interface to a Google TV device is a direction pad, or d-pad: arrow keys for up, down, left, and right along with a select button. Some Google TVs may also have a mouse that is considered to be a “fake touch” input device. This has several implications in terms of input method assumptions when designing and publishing applications.
First, when designing applications for Google TV, keep in mind the navigational limitations of the d-pad: users can’t easily skip over items, there’s no diagonal equivalent, and it doesn’t emulate multitouch navigational aids. For instance, if your interface currently has a row of items with the two most common on the far left and far right for convenient access with thumbs, these two items may be inconveniently separated for the average Google TV user. In other words, pay attention to control navigation order.
Second, without touch there is also no multitouch support. If your application requires multitouch gestures for navigation actions, functions, or other important components, it won’t work with Google TV.
Third, the Android Market uses filters to prevent applications from showing up on certain devices based on explicit items in the manifest file. In addition, several items are implicitly defined. One of these is touch. If you don’t have an entry for touch, the filter will assume that touch is required, and your application will not show up for Google TV. In order for your application to show up, you must set the required attribute of the <uses-feature> entry to false, like this:
<uses-feature android:name="android.hardware.touchscreen" android:required="false" />
This says that your application uses touch, but that it’s not required. That is, your application will function correctly when not on a touch enabled device. You will need to provide smooth alternative functionality for this use-case.
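The same pattern applies to other hardware a television lacks. For example, if your manifest implicitly requires telephony (say, through a permission), a declaration like the following, offered here as a suggestion rather than something from this tutorial, keeps the application visible on Google TV:
<uses-feature android:name="android.hardware.telephony" android:required="false" />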

No Native Development Kit, Yet

If your application relies upon the C/C++ libraries and the Android Native Development Kit (NDK), you’re going to run into trouble. For now, there is no NDK support for the Google TV. Keep asking for it, though, because it came up time and time again at Google IO as a developer request, and it’s developer requests that get new features on the roadmap for future releases.

Testing Your Application

As there is no emulator yet available with a true Google TV Android image, we can only test the effects of the higher screen resolution and of using the application without a touch screen.
The easiest way to do this is to create a new AVD using Android 3.1, API Level 12, with a resolution of 1920×1080 (or 1280×720) and a touch screen setting of false. The performance of the emulators may make this difficult, but at least you can get an idea of what the screen will look like and how the navigation of your application will function without touch.

Conclusion

The Android Market and the coming avalanche of applications for Google TV devices are still several months away. However, you can use this time to prepare your applications for these exciting new devices. Targeting Google TV devices is fairly straightforward: simply choose the right version of the Android SDK (Honeycomb) and consider your layouts, graphics and navigational elements carefully. Provide smooth alternative functionality for touchscreen and telephony features, which are unavailable on these devices. You will also be ready to make sure your application appears in the market for Google TV devices as soon as they reach users’ hands.

About the Authors

Mobile developers Lauren Darcey and Shane Conder have coauthored several books on Android development: an in-depth programming book entitled Android Wireless Application Development and Sams Teach Yourself Android Application Development in 24 Hours. When not writing, they spend their time developing mobile software at their company and providing consulting services. They can be reached via email at androidwirelessdev+mt@gmail.com, via their blog at androidbook.blogspot.com, and on Twitter @androidwireless.


http://links.visibli.com/share/PnBzVZ