The purpose of the Android Sample App is to provide useful example code to help you integrate your platform implementation with the Alexa Auto SDK. The Android Sample App provides an example of creating and configuring an instance of the Engine, overriding the default implementation of each Alexa Auto Platform Interface, and registering those custom interface handlers with the Engine. It includes one default example implementation of authorizing with Alexa Voice Service (AVS) via Code Based Linking (CBL). The Android Sample App also includes detailed logs for interactions with the Alexa Auto SDK and convenience features for viewing those logs in the application, as well as UI elements relevant to each Platform Interface implementation.
Table of Contents:
- Prerequisites
- Enabling Optional Device Capabilities
- Setting up the Android Sample App
- Running the Android Sample App
- Using the Android Sample App
- Debugging Notes
- Known Issues
To use the Android Sample App, you need an Amazon Developer account.
After creating an Amazon developer account, you'll need to register a product and create a security profile on the AVS developer portal.
When you follow the instructions to fill in the product information:
- Use your own custom information, taking note of the Product ID, as this information is required for your configuration file.
- Be sure to select Automotive from the Product category pull-down.
When you follow the instructions to set up your security profile, generate a Client ID and take note of it, as this information is required for your configuration file.
The Android Sample App requires a configuration file that contains the device client information required for authorization with AVS.
In order to use certain optional Alexa Auto SDK functionality (for example, AmazonLite Wake Word, Alexa Communications, Local Voice Control (LVC), Device Client Metrics (DCM), or Voice Chrome for Android) with the Sample App, your product must be whitelisted by Amazon. Copy the product's Amazon ID from the Developer Console and follow the whitelisting directions on the Need Help? page.
Create your project directory (if you do not already have one):

```shell
$ mkdir ~/Projects
$ cd ~/Projects
```
Clone the `alexa-auto-sdk` repository into your project directory:

```shell
$ git clone https://github.com/alexa/alexa-auto-sdk.git
$ cd alexa-auto-sdk
$ export AAC_SDK_HOME=$(pwd)
```
You must populate the `app_config.json` file with the configuration information required to authorize your device profile with AVS:

- Open the `app_config.json` file in your favorite editor.
- For the `"clientId"` parameter, replace `"<YOUR DEVICE'S CLIENT ID>"` with the Client ID that you generated when you set up your security profile for your development device.
- For the `"productId"` parameter, replace `"<YOUR DEVICE'S PRODUCT ID>"` with the Product ID that you entered when you filled in the product information for your development device.

**Note:** The `"clientId"` and `"productId"` must correspond to a development device profile that you created as an automotive product by selecting the `Automotive` product category when you filled in the product information.

**Note:** You can leave `"amazonId"` set to its placeholder value. This parameter is not required unless you are using the optional Device Client Metrics (DCM) extension.
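For reference, a filled-in `app_config.json` might look like the following sketch. The values shown are placeholders, and the exact shape of the file should be taken from the copy shipped with the Sample App; only the three fields discussed above are shown here:

```json
{
  "clientId": "amzn1.application-oa2-client.0123456789abcdef",
  "productId": "MyCarHeadUnit",
  "amazonId": "<YOUR PRODUCT'S AMAZON ID>"
}
```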
Choose one of the following two options to include the Alexa Auto SDK build dependencies. These options correspond to two build flavors: remote and local.
Note: If you want to implement any optional modules (such as wake word support, Alexa Communications, Local Voice Control (LVC), Device Client Metrics (DCM), or Voice Chrome for Android), you must choose option 2 and build the platform AARs and the core-sample AAR using the Auto SDK Builder. The prebuilt AARs available in JCenter are for the default Auto SDK modules only.
Note: The Auto SDK requires Gradle version 4.10.1 - 5.6.2 (5.1.1 - 5.6.2 if you are using Android Studio).
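If your project uses the Gradle wrapper, one way to pin a supported version is through the standard `gradle/wrapper/gradle-wrapper.properties` file. This is a sketch of the usual wrapper layout; `5.6.2` is just one version inside the supported range:

```
distributionBase=GRADLE_USER_HOME
distributionPath=wrapper/dists
distributionUrl=https\://services.gradle.org/distributions/gradle-5.6.2-all.zip
zipStoreBase=GRADLE_USER_HOME
zipStorePath=wrapper/dists
```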
The pre-built platform AARs for the default Auto SDK modules and the core-sample AAR required to run the Android Sample App are available in the JCenter repo.
The Sample App builder scripts are configured to use JCenter to always pull the latest release artifacts during compilation. To run the builder scripts, issue the following command:
```shell
$ ./gradlew assembleRemoteRelease
```
Alternatively, you can manually download the platform AARs and core-sample AAR from the JCenter repo to `${AAC_SDK_HOME}/samples/android/app/src/main/libs`.

**Important!** If you choose to download the AAR files manually, you may need to create a `libs` directory, and you must make sure to download the platform AARs and core-sample AAR corresponding to the version of the Alexa Auto SDK that you are using.
Once you have downloaded the AARs, follow the steps to configure the project in Android Studio.
If you do not use the pre-built platform AARs and core-sample AAR, you must build the Android-specific binaries for the Alexa Auto SDK library project to link to and build the Android platform AARs and the core-sample AAR. See the Alexa Auto SDK Builder instructions for details about how to build the Alexa Auto SDK dependencies for Android.
**Important!** The Android Sample App always compiles the platform AARs and core-sample AAR from `${AAC_SDK_HOME}/samples/android/app/src/main/libs` and `${AAC_SDK_HOME}/builder/deploy/aar` when available.

If you decide to use the Alexa Auto SDK Builder to generate the platform AARs and the core-sample AAR, make sure to remove all Alexa Auto SDK platform AARs from `${AAC_SDK_HOME}/samples/android/app/src/main/libs`. You can keep any third-party libraries.
Once you have generated the platform AARs and the core-sample AAR, issue the following command to run the builder scripts:
```shell
$ ./gradlew assembleLocalRelease
```
Alternatively, you can follow the steps to configure the project in Android Studio.
Note: If you get Gradle-related errors (such as `Could not open settings remapped class cache...`, `Could not open settings generic class cache...`, or `BUG! exception in phase 'semantic analysis' in source unit 'BuildScript' Unsupported class file major version 57`) when attempting to build the Android Sample App using the Alexa Auto SDK Builder, install Java 8 and point `JAVA_HOME` to 1.8: `export JAVA_HOME=$(/usr/libexec/java_home -v 1.8)`.
Note: The Auto SDK requires Android Studio version 3.4.1+. In addition, you must ensure that the Gradle version you are using is compatible with the Android Studio version you are using. See the Android Gradle Plugin Release Notes for information about matching Android Studio versions to Gradle versions.
Note: These instructions assume that you have populated the `app/assets/app_config.json` file with information specific to your device and included the Alexa Auto SDK build dependencies (AAR files).
- Launch Android Studio and select Open an existing Android Studio project.
- Open the `${AAC_SDK_HOME}/samples/android` folder and click the Open button. (Tested with Android Studio version 3.x)
- Under Build Variants, select the appropriate build flavor (`localRelease` or `remoteRelease`).
Note: Android Studio builds and signs the Android Package File.
Use Android Studio to install and run the Sample App on your target device.
- Minimum tested Android API level 22
- Minimum tested and recommended Android NDK Revision 16b
When the Sample App launches, it displays a code and a URL in a box. Follow the on-screen prompt to authenticate with AVS using CBL:
- Open a browser and navigate to the URL displayed in the Sample App.
- In the browser, enter the code displayed in the Sample App.
- Click Continue and follow the onscreen instructions in the browser to complete the authentication.
The Sample App provides an example of how to create and configure an instance of the Engine, extend the Alexa Auto SDK Platform Interfaces, and register the interface implementations with the Engine. The Platform Interface implementations are located in the `impl/` folder of the `com.amazon.sampleapp` directory with the `Handler` postfix. These classes extend the JNI wrapper classes, which mirror the Alexa Auto C++ API. You can read more about these interfaces in the following documentation:
- Phone Call Controller module README

  Note: The Android Sample App includes a simulated local phone that leverages the Phone Call Controller module. However, it cannot currently use cellular voice connections paired with or installed in the host device. You can use this simulator as an example of how to implement the Phone Control interface on the host platform to perform actions such as dialing, hanging up, etc.

- Alexa Presentation Language (APL) module README

  Note: APL rendering on the Android Sample App requires a component that is available by request from your Amazon Solutions Architect (SA) or Partner Manager.
The Sample App GUI consists of a menu bar and a log console. The expandable menu icon in the menu bar opens an options menu to the right of the screen that contains GUI elements relevant to the Platform Interface implementations as well as the authentication UI. Interacting with Alexa and the Engine requires successful authentication with AVS. You can log out using the Log Out button in the options menu, which will clear the saved refresh token.
You can use the microphone icon on the menu bar to initiate a speech interaction with Alexa, either via tap-to-talk (press and release) or hold-to-talk (press and hold). You can also initiate an interaction with the Alexa wake word.
Initiate various interactions with Alexa and explore the options menu features and the log console to better understand the Alexa Auto SDK functionality. The logger implementation tags log messages in the following way:
- `[AVS]` refers to log messages from the AVS Device SDK.
- `[AAC]` refers to log messages from the Alexa Auto SDK.
- `[CLI]` refers to log messages from the Sample App itself.
Note: Some Alexa interactions will return data for rendering a Display Card for visual feedback. Card rendering in the Sample App is an example of parsing the payload of rendering calls to the TemplateRuntime Platform Interface. The Sample App implementation of these cards is not meant as a UI design guideline or requirement.
Every request to Alexa Voice Service (AVS) requires a Login with Amazon (LWA) access token. Code-Based Linking (CBL) is the recommended method to acquire access tokens and is demonstrated by the Android Sample App. See the CBL module README for details about the Auto SDK's implementation of CBL.
Your platform implementation should handle cases where a GPS location cannot be obtained by returning the `UNDEFINED` value provided by the Auto SDK. In these cases, the Auto SDK does not report the location in the context, and your platform implementation should return a localization object initialized with `UNDEFINED` values for latitude and longitude (`(latitude, longitude) = (UNDEFINED, UNDEFINED)`) in the context object of every SpeechRecognizer event.
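The fallback behavior above can be sketched as a stand-alone example. This is illustrative only, not the real Auto SDK API: `UNDEFINED` here is a hypothetical sentinel constant, and `getLocation()` stands in for whatever platform location lookup your implementation uses.

```java
// Illustrative sketch only: UNDEFINED is a stand-in for the Auto SDK's
// sentinel value, and the class/method names here are hypothetical.
public class LocationSketch {
    // Hypothetical sentinel meaning "no valid coordinate available".
    static final double UNDEFINED = Double.MIN_VALUE;

    // Returns {latitude, longitude}, falling back to UNDEFINED when no GPS fix exists.
    static double[] getLocation(Double latitude, Double longitude) {
        if (latitude == null || longitude == null) {
            // No GPS fix: report (UNDEFINED, UNDEFINED) so the Engine omits
            // the location from the SpeechRecognizer event context.
            return new double[] { UNDEFINED, UNDEFINED };
        }
        return new double[] { latitude, longitude };
    }

    public static void main(String[] args) {
        double[] noFix = getLocation(null, null);
        System.out.println(noFix[0] == UNDEFINED && noFix[1] == UNDEFINED); // prints "true"
        double[] fix = getLocation(47.6062, -122.3321);
        System.out.println(fix[0]); // prints "47.6062"
    }
}
```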
In order to view Google Map locations on navigation requests via the Sample App, you must input your own Google Maps API key.
- For details about how to get the API key, see the Google Maps Documentation.
- Put the API key into the `AndroidManifest.xml` file under the value section of the meta-data tag `com.google.android.geo.API_KEY`.
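The meta-data entry follows the standard Android manifest form; a sketch of what it might look like inside the `<application>` element (the value shown is a placeholder for your own key):

```xml
<application>
    <!-- Google Maps API key; replace the placeholder with your own key. -->
    <meta-data
        android:name="com.google.android.geo.API_KEY"
        android:value="YOUR_GOOGLE_MAPS_API_KEY" />
</application>
```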
The Sample App does not configure SiriusXM as a local media source by default. If you need the SiriusXM local media source, you must enable and build it. To do this, uncomment the Mock SIRIUSXM platform handler registration in the `MainActivity.java` class, then rebuild the Sample App.
Note: When SiriusXM is present as a local media source, the cloud defaults to local SiriusXM only and blocks any use of the cloud SiriusXM service even if the local implementation/service is unavailable or not enabled.
`SpeakerManager` is now a configurable option, enabled by default. When it is not enabled, user requests to change the volume or mute receive an appropriate Alexa response, e.g. "Sorry, I can't control the volume on your device".

You can programmatically generate the speaker manager configuration using the `createSpeakerManagerConfig()` factory method, or provide the equivalent JSON values in a configuration file.
```json
{
  "aace.alexa": {
    "speakerManager": {
      "enabled": false
    }
  }
}
```
See the Android Sample App `MainActivity.java` for an example of programmatically generating your speaker manager configuration.
```java
ArrayList<EngineConfiguration> configuration = new ArrayList<EngineConfiguration>(Arrays.asList(
    ...
    AlexaConfiguration.createSpeakerManagerConfig( false ),
    ...
));
```
Use Android Studio and press the Debug 'app' button on the toolbar. This launches the app on the target device and attaches it to the Android Studio debugger.
Debugging Java code is straightforward. Open the file and scroll to the method you wish to debug. Click the vertical gray bar to the left of the source editor to set a breakpoint on the source line. When the breakpoint is hit, use the Android Studio UI to navigate the stack trace, watch variables, etc.
Debugging instructions for the Alexa Auto SDK C++ library are the same as for debugging Java code. The Alexa Auto SDK Android library is compiled and linked by Android Studio, so no special instructions are required.
To debug external libraries that are not built by Android Studio, follow these instructions. (At present, all modules except platform/android and sample/android are built outside Android Studio.)
Prerequisite: Install the Low Level Debugger (LLDB) package in Android Studio. You can find the LLDB component on the SDK Manager -> SDK Tools tab.
- Use the builder script to generate a build with debug symbols. See the Builder README for documentation of builder options. For example, to build an Android x86-64 build with debugging symbols, use: `builder/build.sh android -t androidx86-64 -g`
- Debug symbols are located at: `builder/deploy/android<arch>/aac-sdk-build-<arch>-android-<apilevel>-dbg.tar.gz`
- Extract the symbol tar (`tar -xvf aac-sdk-build-<arch>-android-<apilevel>-dbg.tar.gz`). Extracted symbols are located here: `builder/deploy/androidx86-64/opt/AAC/lib/.debug`
- Rebuild the platform AARs using: `builder/build.sh gradle -g`
- Rebuild the Sample App in Android Studio.
- Launch the Sample App in Android Studio by pressing the Debug 'app' button on the toolbar.
- Select View -> Tool Windows -> Debug.
- In the debug window, switch to the lldb tab.
- To debug each external library, provide LLDB the location of the library's symbols using `add-dsym`. For example: `(lldb) add-dsym ${AAC_SDK_HOME}/builder/deploy/androidx86-64/opt/AAC/lib/.debug/libAACECoreEngine.so`
- Use LLDB `image lookup` to locate the function to debug. For example: `image lookup -vn EngineImpl::start`
- If you are debugging on macOS, the `CompileUnit` (source file) output of LLDB `image lookup` points to the file system in the Docker image. Use LLDB `settings set target.source-map` to direct LLDB to locate the source on the local file system instead. For example:

  ```
  (lldb) settings set target.source-map /workdir/build/tmp-android-22/work/core2-32-linux-android/aac-module-core/0.99.0-r0/src/engine/src/ ${AAC_SDK_HOME}/modules/core/engine/src/
  ```

  Note: Remember to do this for each module you wish to debug.
- From the LLDB `image lookup` output, locate the fully qualified unmangled name of the function ("Function: name = "). Set the breakpoint on the function with LLDB `breakpoint set`. For example: `breakpoint set --name aace::engine::core::EngineImpl::start`
- When the breakpoint is hit, use the Android Studio debugger window to find call stacks, watch variables, etc. You can also set further breakpoints using the Android Studio editor, with either method described under Debugging Java Code, and use other LLDB features as you need.

Note: If the LLDB window isn't visible in the debug tab, select Run -> Edit Configurations -> Debugger and change the `Debug Type` to `Native`.
- In the Alexa companion app, if you change the timezone of the device to a value already listed in the sample app timezone drop down, the timezone field of the sample app is not updated to the new value until the app is restarted.
- After you forcibly close an external media app and then press the "next" playback button, the media playback gets into a "no content playing" state in which all GUI playback control buttons no longer work.
- Alexa Presentation Language (APL) rendering does not support ssmlToSpeech and ssmlToText transformers. Therefore, some skills, like Jeopardy, will not provide the expected user experience.
- The sample app goes into listening mode when calling 911, as Alexa prompts the user to say "Alexa, Cancel" to stop. This is due to the Android platform not providing audio loopback to cancel out self triggers. See the Loopback Detector README for an example of using the Auto SDK with the AmazonLite wake word to cancel out self references.
- The `AudioOutput.prepare()` method implementation in `AudioOutputHandler` blocks returning until it reads the full audio attachment from the Engine on the caller's thread. If working properly, the implementation returns quickly and reads the attachment on a separate thread.
- A generic error TTS is returned when the user initiates offline contact calling before uploading contacts to the cloud on the sample app with the optional Local Voice Control extension.
- Particular sections of the Flash Briefing do not resume properly from a paused state. Attempting to play after pause in some cases may restart the media playback.
- Display card rendering is not adaptable to a variety of screen sizes.
- The sample app does not implement managing inter-app audio focus. As a result, other apps do not recognize its audio playback appropriately.
- Alexa dialog playback may stop abruptly when switching between Wi-Fi and mobile data.
- The sample app disconnects from AVS after remaining idle for some time and takes a while to reconnect.
- Music service provider logos in the SVG format are not rendered during the music playback.