Posted by alexeyp
on March 1, 2007 at 11:06 AM PST
A new feature of ME Framework 1.2 solves some of the problems related to debugging Java ME test suites.
As a follow-up to the past article about Debugging with ME Framework, here is a guest post from Alexander Alexeev (aka Skavas) on the new feature
he has integrated into the ME Framework,
the Interactive MIDlet agent. The feature addresses some usability
issues of executing large test suites on mobile devices, provides
on-screen indication of testing progress, and allows performing some
operations with test results directly on the device.
The past article described the approaches used to execute large test
suites for Java TM ME implementations, as well as some techniques
for optimizing test execution time. To restate the main
points relevant to this debugging topic:
- test execution is managed on the server side
- the test execution process consists of sequential
downloads of test MIDlet suites to the device
- during this process, the AMS and the MIDlet suites
exchange control messages with the test harness
- one optimization, which minimizes network traffic
and the number of downloads/installations/runs/removals, is to package
multiple tests into a single bundle.
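The bundling optimization above can be sketched as a simple partitioning step. This is a hypothetical illustration (the class and method names are mine, not the ME Framework's; the real packaging is driven by harness configuration):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: partition a flat list of tests into fixed-size bundles so that
// each download/install/run/removal cycle on the device covers many tests
// instead of one.
public class TestBundler {
    public static List<List<String>> bundle(List<String> tests, int bundleSize) {
        List<List<String>> bundles = new ArrayList<List<String>>();
        for (int i = 0; i < tests.size(); i += bundleSize) {
            // Copy the sub-range so each bundle is an independent list.
            bundles.add(new ArrayList<String>(
                tests.subList(i, Math.min(i + bundleSize, tests.size()))));
        }
        return bundles;
    }
}
```

With 100 tests and a bundle size of 20, this turns 100 install/run/remove cycles into 5, at the cost of each cycle taking longer on the device.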
The diagram describing this autotest
cycle can be seen here.
The 'autotest' approach achieves a
sufficient level of automation, which is one of the high-priority requirements
for the test suites we develop with the JT harness
and ME Framework.
User interactivity may still be desired here in a few situations:
- 'sendTestResult' may not be executed for some reason, for
example because of a VM exit or a test/VM hang. Since this
operation is executed at the Test level, and a Test may consist of
multiple test cases, all information from the test cases that actually ran
may be lost, and it will be unknown which test case caused the problem.
- when the device is slow and the bundling optimization is used (multiple
tests per bundle), the whole test cycle will be shorter, but every
individual test bundle will take time to execute. Without visual
indication it may be hard to tell whether the tests are simply executing
that slowly, or the device stopped responding an hour ago.
- if a test hangs, it should be possible to cancel it
with minimal impact on the execution process,
without stopping and restarting the whole test run.
One solution for these debugging problems is to use Test Export to
run tests isolated from the test harness environment. Test results
from standalone execution are not sent to the JT harness; they are
printed on the device console, if one is available.
Interactivity for Automated Tests
The proposed solution for all of the above problems is to introduce
interactivity on the device side. We added an option to use the Interactive
MIDlet Agent for test execution, which provides the following features:
- shows the current status on the device display
- continuously saves
the current status to the RMS
- provides a control for saving the current status on demand
- allows viewing results stored in the RMS
- provides a control for canceling the
current test/test bundle and starting the next test/test bundle
These interactive features are also available in the Test Export mode.
An additional set of standard configuration questions was added to the ME
Framework Configuration Editor. To make the ME Framework use this
on-screen tracing functionality, see the screenshot.
When the JT harness executes tests with this configuration,
the user interface on the device side displays the following
information and commands:
- an information string with the total count of tests done, a
count of failed tests, and a count of passed tests
- information about the currently running test
- commands to cancel
the test/test bundle and to save the results to the RMS
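The status line described above could be maintained with a small counter object. This is a sketch in plain Java, not the agent's actual classes (the names and exact display format are my assumptions):

```java
// Hypothetical sketch of the on-device status tracking behind the
// information strings: counts of done/failed/passed tests plus the
// name of the currently running test.
public class TestStatus {
    private int passed;
    private int failed;
    private String currentTest = "";

    public void start(String testName) { currentTest = testName; }

    public void finish(boolean ok) {
        if (ok) passed++; else failed++;
    }

    // First information string: overall progress.
    public String statusLine() {
        int done = passed + failed;
        return "Done: " + done + "  Failed: " + failed + "  Passed: " + passed;
    }

    // Second information string: the test currently running.
    public String currentLine() { return "Running: " + currentTest; }
}
```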
As you can see here,
this new interface resembles the interfaces of device-side harnesses
that are available for running mobile
variations of JUnit.
To view results stored in the RMS, use the
RMSReader MIDlet application. It has a simple interface that uses
a Command to switch the display between "view log" and "view ref".
There may be better solutions than the one we chose for canceling a hanging test.
The 'cancel test' functionality requires each test to run in a
separate thread. Since the Connected Limited Device Configuration has no
method to interrupt threads in order to break test execution and
proceed to the next test, a special flag is used to mark the test
thread as canceled. The canceled thread is set to minimum priority, and
the agent starts to execute the next test.