Friday, September 16, 2011

Samsung Galaxy Tab 10.1 review: Droid at large


Tablets are basking in well-deserved attention and manufacturers know they need to try hard and make their devices distinct and memorable. Truly unique gadgets are hard to come by these days - especially in Honeycomb land. Which is perhaps part of the reason why iPad is still the one to beat. The Samsung Galaxy Tab 10.1 3G is in for a challenge, and up for it.
Samsung Galaxy Tab 10.1 official photos
Shortly after launch the Galaxy Tab 10.1 was blessed with a custom user experience called TouchWiz UX, which puts more color into Honeycomb, offers a good selection of customizable widgets and, most importantly, tries to ease your way into Android for tablets.

Yet this tablet's main advantage remains that it's the most portable 10" slate to hit the market. It's thinner even than the iPad 2 and a good 42 grams lighter than Apple's frontrunner, while still promising to match its battery performance. And that's no mean feat, since tablets are going hard after netbooks, so they need to back their portability with battery longevity.
The Galaxy Tab 10.1 has a dual-core NVIDIA Tegra 2 processor, a bright 10.1" PLS TFT display of WXGA resolution, a premium set of connectivity options and plenty of storage space. Check out the full list of things going for (and against) the Galaxy Tab 10.1 3G below.

Key features

  • 10.1" 16M-color PLS TFT capacitive touchscreen of WXGA (1280 x 800 pixels) resolution
  • Very lightweight at just 565 g
  • Thinnest slate to date at just 8.6 mm
  • Gorilla Glass display
  • Tegra 2 chipset: Dual-core 1GHz ARM Cortex-A9 processor; 1GB of RAM; ULP GeForce GPU
  • Android 3.1 Honeycomb with TouchWiz UX UI
  • Optional quad-band GPRS/EDGE and tri-band 3G with HSDPA 21 Mbps connectivity
  • 16/32/64 GB of built-in memory
  • 3.2 MP autofocus camera, 2048x1536 pixels, LED flash, geotagging
  • 2.0 MP front-facing camera; video calls
  • 720p HD video recording @ 30 fps
  • Wi-Fi 802.11 a/b/g/n, dual-band; Wi-Fi Direct; Wi-Fi hotspot
  • Proprietary 30-pin connector port for charging
  • Stereo Bluetooth v3.0
  • HDMI TV-out (adapter required), USB host (adapter required)
  • Standard 3.5 mm audio jack
  • Flash 10.3 support
  • GPS with A-GPS support; digital compass
  • DivX/XviD support (Full HD), MP4 support up to HD
  • Accelerometer and proximity sensor; three-axis Gyroscope sensor
  • Polaris office document editor comes preinstalled
  • 7000 mAh Li-Po rechargeable battery

Main disadvantages

  • Non-replaceable battery
  • No microSD card support
  • No standard USB port
  • No Android Honeycomb 3.2 yet
  • No GSM voice capabilities despite the available SIM slot

Samsung Galaxy Tab 10.1 at ours
Samsung are bringing their A game in the Galaxy Tab 10.1. Not that it should be judged by sheer size but the company's biggest tablet is fit to be in charge and meet the competition head on. Whether it's watching films, browsing the web, gaming, video-calls, or imaging, this is one of the best-equipped tablets out there.
The screen quality, the added TouchWiz UX functionality, the good battery and excellent media make it a must-see. The whole package looks like the right mix of style and substance, but we just won't rush to a verdict. The Galaxy Tab had a promising start in our preview. With all the finishing touches in place, it's ready to give its best. Head on past the break to see what the Galaxy Tab 10.1 is made of.

Windows 8

ANAHEIM, Calif.--Microsoft, in revealing details of its upcoming Windows 8 operating system this week at its Build developer conference here, has presented its vision for computing in a tablet era that's starkly different from the one offered by rival Apple.
Microsoft Windows president Steven Sinofsky introduces a test version of a touch-enabled Windows 8.
(Credit: Microsoft)
Apple believes that consumers will want discrete devices that are designed to take on specific tasks. That's why its computers run a beefy operating system designed to handle the heavy-duty processing required, for example, by computer-aided design applications, and its iPads run a much lighter-weight operating system that's fine for surfing the Web or reading a digital book.
That's not the vision Microsoft's pursuing. The software giant believes consumers will want a meaty operating system that can run on a variety of devices--everything from a slim tablet up to a water-cooled high-end gaming system. Not surprisingly, that operating system is Windows.
"Their approach is to take the PC OS and bring it to the tablet which is opposite of what Apple is doing," said Jason Maynard, an analyst with Wells Fargo Securities. "Sometimes when you have a hammer, everything looks like a nail."
Maynard doesn't think that Microsoft's approach is without merit. The company is simply playing to its strengths. After all, Windows runs on more than 1 billion PCs worldwide. And when Windows 8 arrives, most likely late next year, it will ship on as many as 400 million PCs, according to some analyst estimates. At the Build conference, Microsoft harped on the potential market to developers in hopes of convincing them to create new applications for Windows 8.
"The opportunity for building these applications is Windows. These applications will run on all new Windows 8 PCs, desktop, laptop, Windows tablets, small, big screens, all-in-ones--every Windows PC, whether it's a new PC or an upgrade from Windows 7," Windows President Steven Sinofsky told the 5,000 developers gathered for his keynote address at the conference on Tuesday. "That could be 400 million people when this product launches. That's a market opportunity for all of you."
The challenge, though, will be convincing developers to create slick applications that take advantage of the touch-enabled Metro interface of Windows 8. And it's likely that they'll only do that if they believe hardware makers will come up with compelling designs that encourage users to use the new operating system as a tablet and not just the PCs that Windows has traditionally run.
That's why Microsoft has been working with hardware makers to ensure that Windows 8 can run on ARM chips. The ARM system-on-a-chip architecture means that devices themselves can be thinner and lighter. That should open the door to some slim and attractive tablets running the operating system. But those ARM chips won't be able to run some legacy Windows applications unless programmers go through the bother of porting those applications.
That means that ARM tablets running Windows 8 won't have complete backward-compatible functionality. And it removes some of the advantage that running Windows brings to a tablet.
Those legacy applications will be able to run on Windows 8 computers using the x86 architecture with chips from Intel and AMD. But that architecture requires more hardware components, meaning the devices themselves may wind up being thicker and heavier. That's fine for slipping into a dock to handle traditional workplace computing tasks such as crafting a presentation. But those bulkier devices aren't particularly comfortable to sit back and read a book on.
To be fair, it's still early. Microsoft and its partners have at least a year to work out the kinks before Windows 8 and the variety of devices on which it will ship debut. And they recognize the challenges.
Qualcomm is one of the key Microsoft partners, working to optimize its ARM-based Snapdragon chip for Windows 8. It's also working hard to help developers figure out how to port legacy applications to the new platform, though it realizes it can't tackle every one.
"Our focus is going to be on the top applications that address the top 90 percent or so of users," said Steve Horton, director of software in the chipset division at Qualcomm. "But if you're using Quicken 99, you may be stuck."
Similarly, AMD is pushing hard to help its partners create ever thinner devices that can handle the broadest swath of applications.
"There is definitely the opportunity for thinner, lighter devices" running x86 chips, said Gabe Gravning, senior manager for client product marketing at AMD. "The market is moving in that direction."
That's Microsoft's big bet with Windows 8. Microsoft sees Windows as the Swiss Army knife that can meet everyone's computing needs. It's got to hope that the prevailing market wisdom of Apple, providing specific devices running different operating systems tailored for discrete purposes, will prove flawed.


Monday, June 6, 2011



Esther | Tuesday, May 31st, 2011
Agile project management depends on transparency and feedback. Visibility into the product and process is built in with iteration reviews and retrospectives. Task walls and Kanban boards make progress (or lack of it) and bottlenecks obvious. Stand-up meetings seek to raise impediments to management attention. But are managers ready to hear about these problems?
If organizations want to realize the benefits of agile methods, managers need to act on the problems that bubble up from teams, deal with unexpected events on projects and proactively find and fix problems that derail projects. Unfortunately, many managers behave in ways that communicate they aren’t interested in solving problems–and ensure they won’t learn of problems until it’s too late.

“Don’t bring me problems, bring me solutions.”

I suspect that managers who repeat this sentence believe it will encourage people and teams to solve problems on their own. But people don't approach their managers with problems they know how to fix and can solve (or believe they can solve). The problems they raise are ones they don't know how to solve, don't have the organizational influence to solve or need some help to solve.
What really happens when managers talk this way? Team members struggle in isolation or ignore problems, hoping they will go away. Managers who tell people not to bring them problems ensure that they won't hear about small problems that are part of a larger pattern.

“Failure is not an option!”

Managers who rely on this exhortation ensure they won’t hear about risks and issues. The phrase sends the message that success is a matter of character and will rather than the result of planning, observation, re-planning and course correction when something unexpected occurs.
Will and character are assets in any endeavor; however, they are not sufficient for success. Success requires removing impediments and proactively finding and ameliorating problem situations. Failure may not be an option that managers like, but it is always a possibility; ignoring that fact forces problems underground and makes failure more likely.
“The thought that disaster is impossible often leads to an unthinkable disaster.”- Gerald M. Weinberg

“Get with the program or get off the bus!”

When managers give the impression that their minds are already made up, subordinates are less likely to bring up weaknesses, problems or alternatives. People fear that their concerns won’t be heard. Worse, they fear (often with reason) being labeled as naysayers or whiners. Discourage people from shining the light on problems and they’ll stop.
But managers don’t need to be obvious in their discouragement; more subtle actions can also plug the pipe.

Interrupting

Interrupting a person who brings unwelcome news makes it harder for that person, who is already facing a difficult conversation. People interrupt for many reasons–excitement, the desire for more details, etc. But to the person being interrupted, a stream of interruptions can feel like an interrogation. Interrupting implies impatience–and that anything the interrupter has to say is more important than what the other person was about to say.

Ignoring Intuition

A couple of years ago, a friend felt uneasy about an action his manager was taking. He couldn't quite put his finger on why he felt concerned, but his feeling was strong enough that he went to his manager, who dismissed his intuition, telling him, "Come back when you have some facts and we can have a logical argument." But the situation outpaced data gathering and blew up.
Asking for excessive proof and demanding data ensures that a whole class of complex and systemic problems won’t come to attention early.

Non-verbal cues

I coached a manager who furrowed her brow and tapped her pencil when people told her about problems. She was thinking hard. They thought she was irritated with them for bringing bad news.
When there's a problem on a project, the earlier you know about it, the more options you have to mitigate the impact, make a course correction or re-set expectations. But you won't hear about problems if you plug the information pipeline.
“The problem isn’t the problem. Coping with the problem is the problem.” – Virginia Satir
As much as we might wish there were no problems on projects, that’s not the way the world works. Problems are a normal part of life. Managers need to know about problems so they can see patterns, find options and steer projects.
Here are three things you can do to make sure your information pipeline flows:
Tell people you want to hear about problems. Sounds simple–and it is. Assure people that you understand that nothing goes exactly as planned and you don’t expect perfection. You may not want every problem dropped at your doorstep to solve–but if you act as if having problems is a problem, you won’t learn about impediments and issues when they are small.
Learn how to listen. At a recent talk, a participant asserted that people from [fill a non-western country here] don't know how to say "no." This is not true. What is true is that many Americans don't hear it when people from different cultures say "no." The same is true for hearing about problems. If you want to build an early warning information system, you need to learn how to listen. That means refraining from interruptions. It also means listening for less obvious cues and what isn't being said. When there's a long hesitation preceding a positive statement, there's more to learn. If you don't hear any mention of risks or issues, delve deeper.
Teach people how to speak up. I don’t want to clog the information pipeline by implying that I only want to hear about problems that have ready solutions:
The most important and dangerous problems don't have an obvious fix.
Here's a framework that has worked for me. It provides useful information and an agreed-upon format that reduces the psychological barriers to raising issues:
“Here’s my hunch…” This makes it explicit that I don’t require excessive proof.
“Here’s why you need to know about it…” This signals that I recognize that I don’t know everything.
"Here's my data…" If there is data, it's good to know. And I've heard about intuition being borne out enough that "I have a bad feeling about this" is good enough for me.
“Here’s what I’ve considered or tried…” I do want people to think about the issue and I want to hear about their thinking. Problem solving is improved by multiple points of view.
Standard agile practices such as visible charts, frequent demonstration of working product, and retrospectives are all ways to make both progress and problems visible. But if people don’t feel safe to bring up issues, you won’t hear about them until it’s too late. If you take the actions outlined here, it will be easier for people to bring up problems to you. Problems are part of life–and projects. Pretending otherwise creates a culture for them to hide and fester.

Sensing the World from an Android[tm] Device

Dale Wilson, Principal Software Engineer
Object Computing, Inc. (OCI)


Calling it a mobile phone is like the blind man finding the elephant's tail and thinking he's found a snake. Sure it fits in your shirt pocket, and it will let you talk to your grandmother. But that's just the beginning of what this device knows how to do. It knows which way is up. It knows which way is north. It knows where it is on the planet. It knows how high it is above sea level. It knows how bright the light is around it. It knows if there is anything close to its screen. It knows how the birds can get revenge on the pigs who stole their eggs. It is truly amazing.
To know what's going on in the world around it, the mobile device has sensors. Programming is needed to set those sensors up properly and accept the data they produce. That's the topic for this article -- how to write programs that use those sensors. To be more specific, this article focuses on Android-based devices. Equivalent articles could be written about iPhone[tm], Windows Phone 7[tm], WebOS[tm], or BlackBerry[tm] devices, but that will have to wait for another time.

Development Environment

If you are writing a program for an Android device you are probably working in Java, and you almost certainly have downloaded and installed the free Android Software Development Kit. After that you need to understand the Android architecture. There are many ways to acquire that knowledge including taking OCI's Android Platform Development Course. This article will assume you have a basic understanding of Android programming.
At the time of this writing, there are two free IDEs in common use to develop Android applications: Eclipse and JetBrains' IDEA Community Edition. Both of them require freely available Android plug-ins to support Android development. The example program for this article was developed using IDEA.

Getting Started -- Finding a SensorManager

In Android any access to a sensor starts by finding the system-wide SensorManager. This code assumes you are running in the context of an Android Activity. You need to import the Android support for sensors using the following statements:
import android.content.Context;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;
Now the following code will find the SensorManager:
SensorManager sensorManager =
    (SensorManager) getSystemService(Context.SENSOR_SERVICE);

Finding a Sensor

Every model of Android device will have a different set of sensors. Before using any of the sensors described here, you need to check to be sure the sensor is available. You can use the SensorManager to discover what sensors are available on this device or to find a particular sensor. There are two methods available to help:
  • SensorManager.getSensorList(type_specifier) will return a list of all the sensors of a particular type.
  • SensorManager.getDefaultSensor(type_specifier) will return a single sensor - the default sensor for a particular type.
    Be sure to check for a null return! It means no sensor of the requested type is available.
The type_specifier is one of the constants defined in the Sensor class. From now on, I'll refer to this as Sensor.TYPE but this is not a class or an enumerated value. It is just a set of integer constants with similar names.
Possible Sensor.TYPEs include:
  • TYPE_ACCELEROMETER
  • TYPE_GYROSCOPE
  • TYPE_LIGHT
  • TYPE_MAGNETIC_FIELD
  • TYPE_PRESSURE
  • TYPE_PROXIMITY
  • TYPE_TEMPERATURE
When you are using the getSensorList() method, these values may be combined with an OR operator to include more than one type of sensor. There is also a special Sensor.TYPE, TYPE_ALL, that when used with getSensorList() will return a list of all the sensors on the mobile device. We will use that in our example program. For normal use, however, you probably want to call getDefaultSensor() with a specific type.
As an example, let's suppose you are writing a compass application. You want to use the magnetic field sensor in your phone to determine which way is north. To gain access to that sensor, use the following code:
Sensor magMeter =
    sensorManager.getDefaultSensor(Sensor.TYPE_MAGNETIC_FIELD);
There are some interesting points here. First of all there is no special Java class for the different types of sensors. The class Sensor provides a generic interface that is assumed to be flexible enough to support the requirements of any of the sensor types on the device.
The second point of interest is not obvious. There is a sensor type which is not included in the above list, TYPE_ORIENTATION. It is omitted from this list even though it's defined in the Android source code and documentation because Android has deprecated this Sensor.TYPE. Instead the SensorManager provides specialized support for orientation sensors. This orientation support is described later in this article.
Other features of the device such as the camera or global positioning sensor are supported through different APIs. They will have to wait for another article.

Using a Sensor

So now that we have a Sensor, what can we do with it? The surprising answer is, "Not much." The Sensor object itself serves two purposes.
  • It provides information about the sensor: Who makes it? How much power does it consume? How accurate is it?
  • It serves as a handle or identifier for the sensor if you want to talk about it to other parts of the system.
Notably missing from the sensor's interface is any way to read values from the sensor! As we'll see in a minute, that job is handled by the SensorManager.
Also missing is information about how many values the sensor provides. Does it give a single value or a 3-D vector? What units of measure does it use? And so on. For that information you need to go to the Android documentation. The SensorEvent page in particular tells you for each Sensor.TYPE how many and what types of values you can expect.

Reading Data from a Sensor

Assuming we don't really care who makes the sensor or how much power it consumes, but that we are interested in the values provided by the sensor, what's the next step? To read values from the sensor, we have to create an object that implements the SensorEventListener interface. [Aside: there is also an earlier, deprecated, interface named SensorListener - ignore it!]
We can then register our SensorEventListener and the Sensor object to the SensorManager. This does two things. It enables the sensor if it was turned off, and it provides a call-back function that the SensorManager can use when a new value is available from the sensor.
Here is the code that creates a SensorEventListener and provides implementations for the two abstract methods in the interface. These implementations just forward the call to corresponding methods in the containing object.
SensorEventListener magneticEventListener =
    new SensorEventListener() {
        public void onSensorChanged(SensorEvent sensorEvent) {
            // call a method in containing class
            magneticFieldChanged(sensorEvent);
        }

        public void onAccuracyChanged(
            Sensor sensor, int accuracy) {
            // call a method in containing class
            magneticFieldAccuracyChanged(sensor, accuracy);
        }
    };
Having created a SensorEventListener, the program should register it with the SensorManager using code like this:
sensorManager.registerListener(magneticEventListener, magMeter, SensorManager.SENSOR_DELAY_NORMAL);
This SensorEventListener will now receive events from the magMeter Sensor acquired earlier.
The third argument to SensorManager.registerListener() is a suggestion about how often the application would like to receive new values. Possible delay values from fastest to slowest are:
  • SENSOR_DELAY_FASTEST
  • SENSOR_DELAY_GAME
  • SENSOR_DELAY_UI
  • SENSOR_DELAY_NORMAL
Faster speeds cause more overhead, but make the device more responsive to changes in the values detected by the sensor.

Housekeeping and Good Citizenship

Just as important as registering a SensorEventListener and enabling the sensor is disabling the sensor and unregistering the listener when it is no longer needed. Registering and unregistering should be handled as "bookends," so if you add the above code in your Activity's onResume() method (a good place for it), be sure to add this code to the onPause() method.
sensorManager.unregisterListener(magneticEventListener, magMeter);
That will ensure that the device is turned off -- prolonging battery life. Even though most sensors can be shared, unregistering the listener when it is no longer needed will also make sure the sensor is available to other applications that may run on the device.

Handling Sensor Accuracy

Notice that there are two callback methods defined in SensorEventListener: onSensorChanged() and onAccuracyChanged(). We will discuss onAccuracyChanged() first.
As the name implies, this callback occurs when something has increased or decreased the expected accuracy of the values produced by this sensor. The integer argument will be one of the following values - in order from least to most accurate:
  • SENSOR_STATUS_UNRELIABLE means that the values cannot be trusted. Something is preventing the sensor from acquiring accurate readings, so any reported values are just wrong.
  • SENSOR_STATUS_ACCURACY_LOW means that the values are correct, but not very accurate.
  • SENSOR_STATUS_ACCURACY_MEDIUM means the values are fairly accurate, but they are not the best that this sensor is capable of under ideal conditions.
  • SENSOR_STATUS_ACCURACY_HIGH means the sensor is producing the best values it can produce under excellent operating conditions.
Unfortunately, there seem to be some devices that always report SENSOR_STATUS_UNRELIABLE and others that always report SENSOR_STATUS_ACCURACY_HIGH. Don't place too much confidence in the accuracy status.
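Since the accuracy constants are plain integers (documented as 0 through 3, from least to most accurate), a small helper makes log output readable. The sketch below is my own, not part of the Android API; its constants mirror the documented SensorManager.SENSOR_STATUS_* values:

```java
// Hypothetical helper: turn an accuracy code into a label.
// The constants mirror SensorManager.SENSOR_STATUS_* (documented as 0..3).
public class AccuracyLabel {
    public static final int UNRELIABLE = 0;
    public static final int LOW = 1;
    public static final int MEDIUM = 2;
    public static final int HIGH = 3;

    public static String label(int accuracy) {
        switch (accuracy) {
            case UNRELIABLE: return "unreliable";
            case LOW:        return "low";
            case MEDIUM:     return "medium";
            case HIGH:       return "high";
            default:         return "unknown";
        }
    }
}
```

A helper like this is also a convenient single place to apply the workaround suggested above: devices that always report one status can simply be ignored here.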

Using the Sensor Data

Finally, we are ready to discuss the interesting callback -- onSensorChanged(). The argument passed when this method is called is a SensorEvent structure. This structure contains real data from the sensor. Let's see what we've got.
I described SensorEvent as a structure. Technically it's a Java class, but this class does not have any useful methods - only public data members (fields).
The first field is one we've already seen: int accuracy will contain one of the same values as the argument to the onAccuracyChanged() method. Thus for each sample of data from the sensor you know how accurate you can expect the data to be. For practical purposes you might be able to ignore the onAccuracyChanged() notice altogether and just use this value from the SensorEvent, although you still must implement the abstract onAccuracyChanged() method.
The next field is Sensor sensor. This is the same Sensor that we used to register this callback. It is included in case we have common code handling the events from more than one Sensor.
The third field is long timestamp. It tells us when this event occurred. A timestamp in Android has a resolution in nanoseconds. It is based on the most precise timer on the device, but it is not tied to any particular real world clock. This timestamp can be used to calculate the interval between events, but not to determine the time of day (or month or year) when the event occurred.
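Because the timestamp is nanoseconds from an arbitrary origin, the only meaningful operation is subtraction. A minimal plain-Java sketch (the class and method names are my own) for turning two event timestamps into an interval in milliseconds:

```java
// Hypothetical helper: interval between two SensorEvent timestamps.
// Android reports timestamps in nanoseconds; convert to milliseconds.
public class EventTiming {
    public static double intervalMillis(long earlierNanos, long laterNanos) {
        return (laterNanos - earlierNanos) / 1_000_000.0;
    }
}
```

Tracking this interval across successive onSensorChanged() calls is a handy way to see what delivery rate your SENSOR_DELAY_* hint actually produced.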
The last field in the SensorEvent is float[] values. Yes, these are the values we are looking for. Most sensors will produce either one or three values in this array. The array in the SensorEvent is of a fixed size. It usually contains three floats even if the sensor produces fewer numbers. Be careful. Sometimes this array will have a size different from three.
The best approach is to use the Sensor.TYPE available via sensor.getType() to determine how many values are valid. The Sensor.TYPE also determines what units of measurement apply to this sensor. Fortunately Android has normalized the incoming sensor values so all sensors of the same type produce the same number of values using the same units.
Of course if you know what type of sensor you are working with you may not even need to check sensor.getType(). You can just write your code to handle the values you know you will receive.
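The type-based dispatch described above can be sketched as a plain-Java lookup. The class and method names here are hypothetical, and the TYPE_* constants mirror the documented Sensor.TYPE_* integer values; consult the SensorEvent documentation for the authoritative value counts:

```java
// Hypothetical lookup: how many entries of SensorEvent.values are
// meaningful for a given sensor type. Constants mirror Sensor.TYPE_*.
public class SensorValueCount {
    public static final int TYPE_ACCELEROMETER = 1;
    public static final int TYPE_MAGNETIC_FIELD = 2;
    public static final int TYPE_GYROSCOPE = 4;
    public static final int TYPE_LIGHT = 5;
    public static final int TYPE_PRESSURE = 6;
    public static final int TYPE_PROXIMITY = 8;

    public static int validValues(int type) {
        switch (type) {
            case TYPE_ACCELEROMETER:
            case TYPE_MAGNETIC_FIELD:
            case TYPE_GYROSCOPE:
                return 3;   // x, y, z vector
            case TYPE_LIGHT:
            case TYPE_PRESSURE:
            case TYPE_PROXIMITY:
                return 1;   // single scalar reading
            default:
                return -1;  // unknown: check the SensorEvent docs
        }
    }
}
```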

Sensor Coordinates

Many of the sensors provide a three dimensional vector for the measured value. They provide value for the x-axis, the y-axis, and the z-axis as values[0], values[1], and values[2] respectively. Now all you need to know is the relationship of these axes to the actual device.
To simplify matters, all devices use the same axes albeit with different units. The axes are firmly attached to the device. If you move the device, the coordinate axes move right along with it.
Every device has a natural orientation. For most phones the natural orientation is portrait (taller than it is wide). For most tablets, on the other hand, the natural orientation is landscape (wider than it is tall). The axes for the device are based on this natural orientation.
The origin of the axes -- point (0,0,0) -- is in the center of the device's screen.
If you hold the device vertically in its natural orientation, the x axis runs left to right across the center of the screen. Positive values are to your right and negative values are to your left.
The y axis runs up and down in natural orientation. Positive values are up and negative values are down.
When you stare straight at the screen you are looking along the z axis. Negative values are behind the screen and positive values are in front.
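One common use of a three-axis reading is its overall magnitude; for an accelerometer at rest this should come out to roughly gravity, regardless of how the device is held. A minimal plain-Java sketch (my own helper, not part of the Android API):

```java
// Hypothetical helper: length of the 3-D vector in SensorEvent.values.
// For a motionless accelerometer this is about 9.8 m/sec^2 (gravity),
// modulo the calibration issues discussed later in this article.
public class VectorMath {
    public static double magnitude(float[] values) {
        return Math.sqrt(values[0] * values[0]
                       + values[1] * values[1]
                       + values[2] * values[2]);
    }
}
```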

Device Orientation

As mentioned previously, using the results from the orientation sensor directly through the SensorManager interface has been deprecated in Android. Instead, there is a different way to determine the orientation of the device.
What you want to know is not really how the device is being held, but rather what screen orientation Android is using. Interpreting the values received from the orientation sensor is only a small part of the puzzle. There are techniques an application can use to lock the screen into a particular orientation or to change orientations under program control regardless of the way the device is actually being held.
For the program presented in this article, we want to display the sensor data visually on the screen. In order to do so, the coordinates returned by the sensors have to be mapped into the coordinates used to draw on the screen.
The 2-D drawing coordinates are relative to the upper left corner. This means the Y values on the screen increase from top to bottom, but the Y values from the sensor increase from bottom to top. To reconcile sensor coordinates to drawing coordinates the Y values must be negated.
After this correction, the coordinates need to be rotated around the Z axis. Because the only orientations involve some number of ninety degree rotations, this can always be done by various combinations of swapping and/or negating X and Y coordinates.
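The negate-then-rotate idea can be sketched as a pure function. This is one possible mapping, with local constants mirroring android.view.Surface's ROTATION_* values (0 through 3); the exact combination of swaps and sign flips for your app depends on the drawing convention you settle on:

```java
// Hypothetical sketch of the swap/negate mapping described above.
// Constants mirror android.view.Surface.ROTATION_* (0..3).
public class AxisMapper {
    public static final int ROTATION_0 = 0;
    public static final int ROTATION_90 = 1;
    public static final int ROTATION_180 = 2;
    public static final int ROTATION_270 = 3;

    // Map a device-frame (x, y) sensor reading to a drawing-frame
    // offset, where screen y grows downward.
    public static float[] toScreen(float x, float y, int rotation) {
        float sy = -y;  // reconcile the flipped y axis first
        switch (rotation) {
            case ROTATION_90:  return new float[]{ sy, -x };
            case ROTATION_180: return new float[]{ -x, -sy };
            case ROTATION_270: return new float[]{ -sy, x };
            default:           return new float[]{ x, sy };  // ROTATION_0
        }
    }
}
```

Since every case is just a swap and/or negation, the mapping stays exact; no trigonometry is needed for ninety-degree rotations.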
Finally, the coordinates have to be scaled properly to match the size of the screen. There are a number of techniques for doing that including using the coordinate transformation matrix built into the Android View (which could handle the orientation-mapping, too), but the details are beyond the scope of this article. See the source code for one way to scale the coordinates.
But before any of this rotation can happen, the software needs to know the screen orientation. Here's the code to find that out:
Display display = ((WindowManager) getContext()
    .getSystemService(Context.WINDOW_SERVICE)).getDefaultDisplay();
int orientation = display.getOrientation();
At this point the variable orientation contains one of the following values:
  • Surface.ROTATION_0
    This is the natural orientation of the device. Notice that for a phone this will be portrait mode, but for a tablet it will be landscape mode.
  • Surface.ROTATION_90
    This is the normal landscape mode for a phone, or portrait mode for a tablet.
  • Surface.ROTATION_180
    The device is upside down.
  • Surface.ROTATION_270
    The device has been turned "the unexpected direction."

A Working Application

So let's put this all together in a working application. The source code associated with this article includes a complete Android project consisting of three Activities.
  • The first Activity displays all sensors reported by the SensorManager.
    Touching the name of any sensor gets you to the second Activity.
  • The second Activity displays the details for a particular sensor. It registers to receive updates from the sensor, and displays the resulting values.
    Hitting the menu button while this Activity is showing gives you the option of displaying the final activity.
  • The final activity displays the readings from the sensor you have selected as a vector on the screen. Of course this only makes sense for a device that returns three coordinates, but the program as written doesn't check for that. Expect strange results if you use this option for a single-valued sensor.
Here's what it looks like running on a Samsung Epic™:
Android Screen Shot: List of devices
Screen 1: The first Activity shows a list of sensors returned from SensorManager.getSensorList(Sensor.TYPE_ALL).
Even though using the orientation sensor as a Sensor object is deprecated, it still shows up in this list.
Android Screen Shot: Accelerometer detail
Screen 2: Selecting SMB380 from the opening screen gets this information about the accelerometer.
It is interesting to note that even when it is sitting motionless on a table, the accelerometer reports an acceleration of over 10.3 m/sec². This is the acceleration due to gravity. But wait! Back in physics class we learned that acceleration due to gravity is 9.8 m/sec². The moral here is that real-world sensors (not just the ones built into mobile phones) usually need to be calibrated.
Also worth noting is that this sensor always reports an accuracy of 0 (meaning unreliable). This accuracy status is itself unreliable! Aside from the calibration issue, the accelerometer on this device is quite accurate.
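One simple way to approach that calibration is to compare the magnitude of a resting accelerometer reading against standard gravity and derive a per-device scale factor. The sketch below is illustrative only (the class and method names are not from the article's project; the constant matches the conventional value of standard gravity, 9.80665 m/sec²):

```java
// Illustrative sketch: derive a calibration scale factor from a resting reading.
public class GravityCalibration {
    // Conventional standard gravity in m/sec^2.
    static final float STANDARD_GRAVITY = 9.80665f;

    // Magnitude of the three-axis acceleration vector.
    static float magnitude(float x, float y, float z) {
        return (float) Math.sqrt(x * x + y * y + z * z);
    }

    // Factor to multiply raw readings by so a resting device reads ~9.81 m/sec^2.
    static float scaleFactor(float restingMagnitude) {
        return STANDARD_GRAVITY / restingMagnitude;
    }
}
```

For the Epic's resting reading of 10.3 m/sec², this yields a scale factor of roughly 0.95 to apply to subsequent readings.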
Android Screen Shot: Proximity sensor detail
Screen 3: The proximity sensor only returns one value.
The only values returned by the proximity sensor on the Epic are 0.0 and 1.0. Software can tell if there's something close to the screen or not, but it can't really tell how far away it is. The moral of the story is not every sensor fits comfortably in the generic Sensor model supported by Android.
Android Screen Shot: Accelerometer as a vector (Portrait)
Screen 4: Accelerometer values as a vector (portrait mode).
Android Screen Shot: Accelerometer as a vector (Landscape)
Screen 5: Accelerometer values as a vector (landscape mode).
These two screen shots show the readings from the accelerometer displayed as a vector. Because the sensor coordinates are tied to the device, but the screen coordinates change when the display switches to landscape mode, the software has to check the orientation to map the vector onto the screen.


Summary

Android makes it easy to access the sensors on the mobile device by normalizing the sensor's behavior and data into a common model. There are a few issues, however, that cannot be hidden from the application.
Not all sensors support all of the properties exposed by the Sensor object - for example, the Bosch accelerometer shown in the screenshot above does not report how much current it uses.
Also, not all devices return the types of data Android expects. The units for a proximity sensor are supposed to be centimeters, but the one shown above provides only a yes/no answer to the question, "Is there something close to the screen?"
In spite of these limits, making an application aware of the world around it via the sensors in the mobile device is a relatively easy task that can potentially produce very useful behaviors from the application.

Source Code

The source code for this example application can be downloaded from the OCI Software Engineering Tech Trends web site as a zip file or a tar.gz file. It is a complete Android project that can be built and run in the Android Emulator or installed directly onto a device via the USB debugging port. Because IDEA was used to develop this project, Eclipse users might have to do a little extra work to import it, but if you are familiar with the Android development environment it should be straightforward.
The download files also include Sensor.apk, a pre-built copy of the Sensors application ready to be loaded into your Android phone. If you would like to regenerate this signed application, the password for the digital signature (included in the "assets" directory) is "sensors".
The source code is covered by a liberal BSD-style license. This makes it available for any use, commercial or otherwise, with proper attribution.
If you are interested in moving beyond this simple application to explore the possibilities of harnessing the power of Android for your organization's needs, please contact us to ask about the wide variety of support and training available from OCI.


The JNB has a new name! The new "Software Engineering Tech Trends" will continue to cover Java and related technologies but will also address the broader spectrum of relevant technologies available today.

OCI Educational Services

OCI is the leading provider of Object Oriented technology training in the Midwest. More than 3,000 students participated in our training program over the last 12 months. Targeted toward Software Engineers and the development community, our extensive program of over 50 hands-on workshops is delivered to corporations and individuals throughout the U.S. and internationally. OCI's Educational Services include Group Training events and Open Enrollment classes.
For further information regarding OCI's Educational Services programs, please visit our Educational Services section on this site or contact us at

OCI Services

OCI offers real, cost-effective, open source support for the software and its suite of associated products. OCI has re-distribution-friendly downloads at and provides support on a time and materials basis (not CPU count).
Fun People Doing Serious Software Engineering