Effective UI development with GUI tools for embedded devices
A good Graphical User Interface (GUI) can thrill customers, and a good reusable GUI development platform can save developers time and money; however, design subtleties and pitfalls keep developers on their toes.
A good User Interface (UI) is priceless. Think of the clean iTunes interface. The meaning of “UI” is quickly becoming something more than “User Interface”; in embedded systems it can now stand for “Unbelievably Important” or an “Untapped Investment.”
As microcontrollers move up the ladder in capability and out into the world of the consumer, and processors move from low-capacity 8-bit to high-capacity 32-bit systems, human interaction with software becomes more important. Along the way, programmers wondered, “What do I do with this horsepower?” Now they’re in a position to use this power to respond to customer requirements that say, “I want this to look like an Android or iPad app.”
However, the amount of effort involved in creating an effective UI is non-trivial. Graphical User Interface (GUI) development takes at least three basic steps: designing a visual interface (windows and widgets), writing the code that implements that interface, and getting that code to work on the specified hardware. Additionally, as processor power has grown, so have customer expectations. Color, touchscreens, gesture recognition, and speech recognition continue to up the ante in development efforts and create even greater challenges. Tools like GUI software packages are available to help automate the basics, but programmers still have a lot of work to do to create a polished interactive interface, and each step of development has its own challenges and remedies.
Designing and building the interface
The old-fashioned, and still valuable, way to build a visual interface is pencil and graph paper. In essence this step involves creating “storyboards” and mapping out the interface.
The updated version of those storyboards is a good GUI package that includes an interface builder: a way for the developer to define a window’s complete layout with adornments, scroll bars, text areas, buttons, widgets, colors, text, and so on. This allows items to be positioned accurately with correctly configured behavior and styles.
There should be transparent objects that can act as containers to group certain objects together, such as a collection of radio buttons that respond to a single message. The library will have a message-delivery system that can deliver the message to the objects in the group. All of the required properties and connections among the controls in the window can be set up using a container hierarchy.
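That container/message pattern can be sketched in a few lines. The `Widget`, `Container`, and `RadioButton` classes below are hypothetical, illustrative names, not PEG’s actual API: the container forwards a select message to every child in its group, and each radio button turns on only if the message names it, which also yields the standard “only one selected” behavior automatically.

```cpp
#include <memory>
#include <vector>

// Hypothetical message and widget types; a real library such as PEG
// defines its own class hierarchy and message IDs.
struct Message { int id; int param; };

class Widget {
public:
    virtual ~Widget() = default;
    virtual void HandleMessage(const Message&) {}
};

// A transparent container groups widgets and delivers a message to all of them.
class Container : public Widget {
public:
    void Add(std::shared_ptr<Widget> w) { children_.push_back(std::move(w)); }
    void HandleMessage(const Message& m) override {
        for (auto& c : children_) c->HandleMessage(m);
    }
private:
    std::vector<std::shared_ptr<Widget>> children_;
};

// Each radio button turns on only when the select message carries its ID,
// so exactly one button in the group ends up selected.
class RadioButton : public Widget {
public:
    static constexpr int MSG_SELECT = 1;
    explicit RadioButton(int id) : id_(id) {}
    void HandleMessage(const Message& m) override {
        if (m.id == MSG_SELECT) selected_ = (m.param == id_);
    }
    bool selected() const { return selected_; }
private:
    int id_;
    bool selected_ = false;
};
```

Because the exclusivity lives in the group’s message delivery rather than in per-button wiring, adding a fourth button to the container requires no new logic.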
There is a critical subtlety here to be aware of. Typically the layout utility is a desktop tool that ultimately generates code for an embedded platform. It is important for the interface to look and behave precisely the same on both the host development platform and the platform being developed. The window builder should enable a What-You-See-Is-What-You-Get (WYSIWYG) design. The same runtime library rendering the image on the desktop should render the image on the destination device. The OS shouldn’t, for example, render a font one way on the desktop OS and appear differently on the end product. Pixel-for-pixel spacing on a small output device can matter a great deal.
Creating the code
Once a design is settled, some libraries still leave developers to write code from scratch. That’s not the ideal solution. A good GUI package will generate all the code and configuration files required to create the interface. The generated code should be compiler-neutral, typically standard ANSI C or C++.
A good bit of the UI’s basic behavior code can be generated as well. For example, a collection of radio buttons has the standard behavior of all turning off except the one selected. That update behavior can be coded automatically. This is not rocket science; having programmers spend time writing basic functionality wastes their true value.
The functionality behind that interface is up to the developer. Here the library can be a great help. This code should be well documented, showing developers where to add (and not add) code to the program. The automatically generated code should have function stubs with statements equivalent to // put your code here. The designers of the library know what kind of code belongs where and should provide a great deal of guidance. Look for that kind of help in good GUI tools.
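Generated code in this style tends to look something like the hypothetical sketch below (the event names and dispatcher are illustrative, not the output of any specific tool): the tool owns the dispatch skeleton and regenerates it when the layout changes, while the developer fills in only the marked stubs.

```cpp
#include <string>

// Illustrative event IDs, as a layout tool might generate them.
enum EventId { EVENT_BUTTON_OK = 1, EVENT_BUTTON_CANCEL = 2 };

std::string g_last_action;  // stands in for real application state

// Generated stubs: the tool marks exactly where application logic belongs.
void OnOkPressed() {
    // put your code here
    g_last_action = "ok";
}

void OnCancelPressed() {
    // put your code here
    g_last_action = "cancel";
}

// Generated dispatcher: regenerated whenever the layout changes, so it
// should never be hand-edited; only the stubs above are.
void DispatchEvent(int id) {
    switch (id) {
        case EVENT_BUTTON_OK:     OnOkPressed();     break;
        case EVENT_BUTTON_CANCEL: OnCancelPressed(); break;
        default: break;
    }
}
```

The separation matters: regenerating the dispatcher after a layout change cannot clobber application logic, because that logic lives only inside the stubs.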
Finally, in addition to the actual programming code, consider UI text. Fonts in foreign languages and non-Roman scripts like Mandarin, Kanji, or Arabic are an important consideration. Look carefully for font support in a tool. How does text get into the UI, and how easy is it to update and modify that text? What happens when a client says, “We’re going after the market in India; we need to have the software localized into Hindi”? Developers need to be able to change the language independently of the code for smooth system transitions.
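One common way to keep language independent of code is a string table: UI code refers to text only by ID, and switching languages swaps data, not code. The scheme below is a minimal sketch with hypothetical names; in a real tool the tables would be generated from translator-maintained resource files rather than hard-coded.

```cpp
#include <map>
#include <string>
#include <utility>

// Hypothetical string IDs and languages; in practice these tables are
// generated from localization files maintained outside the code base.
enum StringId { STR_OK, STR_CANCEL };
enum Language { LANG_EN, LANG_HI };

const std::map<std::pair<Language, StringId>, std::string> kStrings = {
    {{LANG_EN, STR_OK},     "OK"},
    {{LANG_EN, STR_CANCEL}, "Cancel"},
    {{LANG_HI, STR_OK},     "ठीक है"},       // UTF-8 Hindi text
    {{LANG_HI, STR_CANCEL}, "रद्द करें"},
};

Language g_language = LANG_EN;

// UI code calls this everywhere instead of embedding literal text.
std::string UiText(StringId id) {
    return kStrings.at({g_language, id});
}
```

With this indirection, the client’s “localize into Hindi” request becomes a data update plus a font check, not a code change.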
Running on hardware
Once the interface is fully designed and coded, it has to work. It should be compatible with multiple processors, multiple display screens with different display technologies, physical dimensions, and color depths. It needs to be independent of hardware assumptions and dependencies so migration to new platforms and the addition/removal of components will not require substantial recoding or redesign. Input mechanisms should also be independent – mouse, stylus, capacitive touch, resistive touch, and so forth. The library should be software independent as well, and should work with a choice of operating systems, drivers, and other software packages.
To work on a hardware platform, no GUI package can stand alone. It needs a runtime library that sits on the hardware and handles work such as rendering images and fonts. There are also hardware drivers for input and output devices. As an example, take a look at the block diagram for the PEG library from Freescale (Figure 1).
Moving a GUI to a new platform is still work, but a compartmentalized design reduces the work to a minimum. If input is changed from mouse to gesture-based resistive touch, the design is going to need a new input driver. However, a well-factored GUI should require very few if any changes to the design and code. In a well-factored design, the GUI calls a routine to get an XY coordinate instead of calling a mouse driver. The mouse driver feeds that coordinate into the input layer, isolating the GUI from the hardware. Then if the mouse changes to a stylus or a touch screen, the GUI code doesn’t change at all; each new driver feeds its data to the right place.
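That input-layer isolation can be sketched as a registration hook, with hypothetical names throughout: the GUI asks the input layer for a point and never talks to a driver directly, so mouse, stylus, and touch drivers are interchangeable.

```cpp
// Hypothetical input abstraction layer: any driver that can produce an
// XY coordinate plugs in here, and the GUI code above it never changes.
struct Point { int x; int y; bool valid; };

using InputPoller = Point (*)();

static InputPoller g_poller = nullptr;

// Board bring-up registers whichever driver the hardware actually has.
void RegisterInputDriver(InputPoller p) { g_poller = p; }

// The GUI calls only this routine; it has no idea what device is attached.
Point GetInputPoint() {
    return g_poller ? g_poller() : Point{0, 0, false};
}

// Example drivers: in real code each would read its own hardware, but
// both deliver the same Point type to the input layer.
Point MouseDriverPoll() { return {120, 48, true}; }
Point TouchDriverPoll() { return {120, 48, true}; }
```

Swapping the mouse for a resistive touch panel then means registering a different poller at startup; every line of GUI code above `GetInputPoint()` is untouched.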
But wait, there’s more. Recall the importance of WYSIWYG between the desktop designer and the embedded platform: it isn’t just for quality control and testing; there is another significant benefit. It’s possible to build a functional prototype on the desktop without having the actual physical device complete and in hand; the application can be distributed to key stakeholders early in the development cycle to get buy-in before the hardware exists. Then, once the device is real and the specs come in for the next-generation device that needs to hit the market in three weeks, developers will be as prepared as they can be.
Focus on implementation, not code
Using GUI tools can help lock down the basics and give developers more time to focus on making sure an interface’s logic is sound and the interaction is intuitive. Because it is relatively easy to create a working mockup of a UI, developers can test usability before or in parallel with application functionality. End users will thank UI designers when they don’t have to figure out what to choose for messages like the one in Figure 2.
Jump off the shoulders of giants
Creating a UI for the first time is revolutionary. Modifying and leveraging it in future products is evolutionary. Creating a new UI for every new piece of software, or even writing code to create windows and widgets, isn’t a smart use of time. Very bright people solved these problems a good while back, and the intelligent thing to do is reuse known good practices – a simple UI consistent across and independent of different platforms – with the help of a good GUI engine. The proper choice of GUI tools allows UI code to be future-proofed (as much as possible) during development, reducing time and support burdens over its lifetime.
Freescale Semiconductor Jim.Trudeau@freescale.com Roger.Edgar@freescale.com www.freescale.com