
Intel® XDK New Preview


On September 10, Intel® XDK NEW was announced: a new version of the Intel® XDK with many new features that further simplify the development of hybrid HTML5 apps.

For those who want to take a look at this new version, we have made a preview available through this link. The preview is available for Windows only, but the final version of the tool will support Windows*, Linux* (Ubuntu*), and OS X*.

These are the main changes in the tool:

  • Removed the dependency on Java* and Chrome* (nothing against them!). The new version of the XDK was built on node-webkit, which lets the XDK run on virtually any platform and gives us better control over local file management as well as file management in the cloud. This change answers many suggestions we have received from our users in recent months. We would also like to take this opportunity to thank Roger Wang for creating and maintaining node-webkit; his work and his project are of great importance to web/HTML5 developers.
  • A new UI building tool, App Designer, fully integrated into the XDK New. It supports more frameworks (jQuery Mobile*, Twitter* Bootstrap, and App Framework), giving you more choices for UI design, and it supports round-tripping: you can create and modify your UI at any time in a fully integrated way (anyone who has used the XDK's App Starter knows exactly what I am talking about). Speaking of App Starter, in this first release of the XDK New it will be available only in the cloud, but we plan to integrate it into the XDK.
  • A new code editor, Brackets*. We would like to thank the contributors of www.brackets.io, which gives us plenty of help with code editing: syntax highlighting, auto-completion, JSLint*, and much more. Of course, you can keep using your favorite code editor day to day and import the files it generates, but if you do not have a favorite editor yet, it is worth taking a look at the Brackets* editor we bundled with the XDK New.
  • A new emulator based on Ripple* – AppMobi had done a great job on the XDK's current emulator, but we wanted to add Cordova support in addition to the appMobi APIs. With this change you have more flexibility to add APIs and use more interesting device and platform simulation features. One of the changes I like most in the emulator is that, when integrated with App Framework, it now shows how your app's UI will look if you leave theme adaptation to App Framework (yes, you write the UI once and App Framework changes its visual presentation according to the operating system and theme in use, so your UI matches the look and feel of your user's device without your having to write a single extra line of code to handle it).
  • A new user interface – we simply rewrote everything on top of web technologies. This allowed us to create a new UI that makes the tool even easier to use, and we hope it simplifies the creation and management of projects, from local storage to cloud builds.
  • Support for more platforms – the Intel XDK New will have versions for Ubuntu* Linux*, Windows*, and OS X*. Only the Windows preview is available now, but the other versions will be released soon.
  • Support for Cordova 2.9 – you can now create, test, emulate, debug, and cloud-build apps based on Cordova (PhoneGap*), while continuing to use the AppMobi APIs.

For more details on the new features of the XDK New, I recommend taking a look at this page, and if you want to download and try our preview right away, click here.

If you would like to share your opinion about this new version of the tool, or if you have any trouble using it, we have a dedicated area of our forum for it, which you can access here.


  • Game Engines for Android


    With Android continually increasing in popularity, it is always interesting to take a look at the latest collection of game engines available for the platform. I am also interested in seeing which game engines include x86 support, since the number of Intel x86 based mobile devices continues to increase. There are many game engine choices out there, all with different sets of features, pricing, maturity, etc. After doing some research, I found a wide variety of game engines that can be used for creating games that run on Android* based mobile devices. Some engines provide x86 support, while others can be ported to support x86 devices without too much effort.

    Here is the ever-expanding list of game engines I have collected information about. The list includes some features and details about each engine, and an example game on Google Play where I could find one.

    • Project Anarchy by Havok - http://www.projectanarchy.com/
      • FREE Cross-Platform Engine and Toolkit for Mobile Game Developers
      • Develop and release titles on iOS, Android and Tizen for free.
      • Extendible C++ plugin based architecture
      • Includes Havok’s Vision Engine together with Havok’s leading Physics, Animation Studio and AI tools
      • Available now
      • jPCT-AE - http://www.jpct.net/jpct-ae/
        • A java 3D engine optimized for Android.
        • Nice set of features including 3DS, OBJ and other file support, skeletal animations, shader support, texture compression, collision detection, various lighting modes, transparency, fog, and more.
        • An all java game engine that supports x86 Android devices.
        • Free for personal and commercial use.
        • Example: https://play.google.com/store/apps/details?id=mk.grami.max
      • Libgdx - http://code.google.com/p/libgdx/
        • Cross-platform (Windows, Linux, OS X, and Android) 2D/3D engine. Build, run, and iterate on the PC before deploying to the phone.
        • C++ and Java based engine that easily ports to x86.
        • Box2d physics, TMX tile map, shaders, 2D particle system, sprite support, camera apis, OBJ and MD5 model loaders.
        • Full source code available for free.
        • With a few minor changes, I was able to run this C++/Java based engine on x86 Android devices.
        • https://market.android.com/details?id=com.tani.penguinattack
      • gameplay – http://gameplay3d.org/index.php
        • Open-source cross-platform 3D engine aimed at the indie game developer ecosystem.
        • Supports BlackBerry 10 and PlayBook, Apple iOS 5+, Android NDK 2.3+, Microsoft Windows 7, Apple MacOS X, Linux
        • Full-featured rendering system, node-based scene graph system, particle system, Bullet physics engine, audio and UI systems, etc.
        • Open sourced under the Apache 2.0 license
      • Esenthel Engine - http://www.esenthel.com/?id=overview
        • Modern 2D/3D C++ based game engine (Windows, Mac, Android and iOS)
        • Available for unlimited trial if used non-commercially
        • Scripting and C++ support, multiple renderers, animation system, physics engine, streaming game engine, GUI, etc.
        • DirectX 9,10,11, OpenGL, OpenGL ES 2.0, PhysX 3, PhysX 2, Bullet physics integration
        • Tools include a world editor, model editor, data browser, code editor and more.
        • One-click cross platform publishing
        • Android native x86 support
        • https://play.google.com/store/apps/developer?id=Esenthel
      • App Game Kit - http://www.appgamekit.com/
        • Cross platform (iOS, Windows, MacOS, Android, BlackBerry)
        • A 2D OpenGL based game engine with Box2D. Includes support for sprites, particles, input APIs, sound, and music.
        • Looks like it is a C++ based engine that should easily port to x86 Android devices.
        • Write game code in BASIC, or use an available upgrade option to write native C++ code.
        • Free to try, license purchase required to publish.
        • https://market.android.com/details?id=com.texasoftreloaded.theblackhole
      • Orx - http://orx-project.org/
        • Orx is an open source, portable, lightweight, plugin-based, data-driven and extremely easy to use 2D-oriented game engine.
        • Cross platform (iPhone, iPad, Mac, Windows, Linux, Android) game engine.
        • Camera APIs, animations, sound, sprite rendering and data driven for fast and easy prototyping and development.
        • Free open source.
        • C++ based engine that should easily port to x86 Android devices.
        • Example: https://market.android.com/details?id=lyde.sik.gravity
      • DX Studio - http://www.dxstudio.com/
        • 3D game engine with editor.
        • Limited Android features are now supported.
        • C++ based engine that should easily port to x86 Android devices.
        • Currently offered for free.
      • SIO2 Engine – http://sio2interactive.com/
        • 2D/3D cross platform (iOS, Android, bada, WebOS, WIN32 ) game engine.
        • Iterate via simulator on PC
        • Features lua support, exporters for various 3d modeling tools, Bullet physics engine, path finding, sound apis, shader support, animation and networking support.
        • C++ based engine that should easily port to x86 Android devices.
        • Various licenses available for purchase, free to trial.
      • Unigine - http://unigine.com/products/unigine/
        • 3D cross-platform (Windows, Linux, Mac, PS3, iOS, Android) engine.
        • Physics, scripting, etc. Unclear what features are supported for mobile.
        • Evaluation available to companies working on commercial projects. License purchase required.
        • C++ based engine that should easily port to x86 Android devices.
        • Example: http://www.demolicious-game.com/
      • Candroidengine - http://code.google.com/p/candroidengine/
        • 2D Java engine.
        • Sprites, tile animation, background APIs, etc.
        • Dalvik only engine that should work on all architectures.
        • Full source code available for free.
      • Mages Engine - http://code.google.com/p/mages/
        • Multiplayer client/server game engine.
        • Java engine that should work on all architectures.
        • Full source code available for free.
      • Unreal Development kit - http://udk.com/
        • No Android support in UDK. The full license on Unreal Engine needed for Android support.
        • This is the free edition of Unreal Engine 3 that provides access to the 3D game engine.
        • UDK supports iOS and Windows only.
        • Free to use UDK for noncommercial and educational use


      The great thing about Android on x86 is that it opens a new class of devices for all of the games built on these engines. Unfortunately, not all of these game engines support x86 native binaries, but that is probably just a matter of time. x86 support is available in the latest Android NDK, and porting some of these engines may be as simple as a recompile. We have created a couple of documents to guide you and have forums available to help along the way.
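      For engines built with ndk-build, that recompile usually amounts to adding x86 to the list of target ABIs in the project's build configuration. A minimal `Application.mk` sketch (the exact ABI list is an assumption; it varies per engine):

```makefile
# Application.mk - build the native libraries for both ARM and x86 devices.
# ndk-build produces one .so per ABI and packages all of them into the APK.
APP_ABI := armeabi-v7a x86
APP_OPTIM := release
```

      With this in place, an x86 Android device picks the x86 library at install time and no Java-side code changes are needed.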


      This post continues to gain in popularity, as does the number of Android game engine choices. I will continue to update this post with the latest information from user comments and news from the web. I hope this list helps as a great starting point for those thinking about writing an Android game. Feel free to post comments about game engines I am missing or any updated information you find.

    • How to Enable Intel® Wireless Display Differentiation for Miracast* on Intel® Architecture phone


      Introduction

      Wireless display technology is becoming more and more popular on Android* phones and tablets since Google started supporting Miracast in Android 4.2. Wireless display makes it easy for end users to extend their phone's screen to a much larger display. I think there is a good chance that ISVs will integrate the wireless display feature into their applications, especially games and video players.

      But realizing Intel® wireless display differentiation for Miracast on Android x86 phones is a big challenge for ISVs. This article introduces how to enable dual display differentiation for Miracast through a case study of enabling the iQiyi online video player and WPS Office on the K900. We hope that many amazing applications can be enabled in the future.

      What is Miracast

      The Wi-Fi Alliance officially announced Wi-Fi* CERTIFIED Miracast on September 19, 2012. It is a groundbreaking solution for seamlessly displaying video between devices, without cables or a network connection. Users can do things like view pictures or videos from a smartphone on a big screen television, share a laptop screen with the conference room projector in real time, and watch live programs from a home cable box on a tablet. Miracast connections are formed using Wi-Fi CERTIFIED Wi-Fi Direct*, so access to a Wi-Fi network is not needed—the ability to connect is built into Miracast certified devices.

      A Miracast connection is based on a Wi-Fi Direct, peer-to-peer link. The Wi-Fi-based Miracast architecture is shown below.



      Figure 1: Miracast* architecture

      There are four modes of Miracast connection as shown below:



      Figure 2: Miracast* connection modes

      With a Miracast connection, you can enable connectivity across devices without Wi-Fi AP infrastructure, as topology 1 shows. You can also connect to a display via an adaptor while connecting to an AP, as topology 2 shows. It is very convenient to watch online video at home with this mode. If you have a smart TV that also supports Miracast, your TV, AP, and your smartphone can even connect to each other, as topology 4 shows.

      According to the Miracast standard, the interactive mode of source and display devices can be diagrammed as follows:



      Figure 3: Miracast* session management

      Source and display devices discover each other's Miracast capabilities prior to connection setup. The connection is based on Wi-Fi Direct or TDLS. Source and display devices then determine the parameters for the Miracast session through capability negotiation, which runs over a TCP connection. Finally, source devices transfer content to display devices in MPEG2-TS format over a UDP connection.
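      The transport split described above (control over TCP, media as MPEG2-TS over UDP) can be illustrated at the socket level. This toy sketch is not Miracast code; it only moves one TS-sized datagram (188 bytes, sync byte 0x47) over UDP loopback to show the shape of the media path:

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

// Toy illustration of the Miracast media transport: one MPEG2-TS-sized
// packet delivered over UDP. Real Miracast wraps TS packets in RTP and
// negotiates the session separately over TCP.
public class TsOverUdpSketch {

    public static byte[] roundTrip() throws Exception {
        try (DatagramSocket receiver = new DatagramSocket(0);
             DatagramSocket sender = new DatagramSocket()) {
            receiver.setSoTimeout(2000);          // don't block forever
            byte[] tsPacket = new byte[188];      // one MPEG2-TS packet
            tsPacket[0] = 0x47;                   // TS sync byte
            sender.send(new DatagramPacket(tsPacket, tsPacket.length,
                    InetAddress.getLoopbackAddress(), receiver.getLocalPort()));
            byte[] buf = new byte[188];
            DatagramPacket incoming = new DatagramPacket(buf, buf.length);
            receiver.receive(incoming);           // blocks until the datagram lands
            return buf;
        }
    }

    public static void main(String[] args) throws Exception {
        byte[] got = roundTrip();
        System.out.println(got.length + " bytes, sync ok: " + (got[0] == 0x47));
    }
}
```

      Because UDP gives no delivery guarantee, a real display device must resynchronize on the 0x47 sync byte when packets are lost, which is exactly what the TS framing is designed for.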

      Miracast wireless streaming-supported formats are listed in Table 4.

      Table 4: Miracast* streaming format

      Miracast on Android 4.2

      Google started supporting Miracast in Android version 4.2. End users can share movies, photos, YouTube videos, and anything that is on their screen with an HDTV via wireless display technology. The external HDTV is listed as an external display.

      Now Miracast on Android supports clone mode and presentation mode, as shown below:



      Figure 5: Miracast* support modes

      Clone mode duplicates the phone display on the remote display. The resolution of the frames sent to the adapter matches the resolution of the local display. In this mode, both local and remote displays are turned on and show the same content.

      In presentation mode, Android now allows your app to display unique content on additional screens that are connected to the user’s device over either a wired connection or Wi-Fi. The apps must be modified to support this mode, or they will default to clone mode.

      Develop differentiation for Miracast on Intel Architecture (IA) phone

      Intel’s wireless display solution on Android phones and tablets is fully compatible with Miracast. We have also enabled some apps with differentiated Miracast usages on IA phones.

      The first one is enabling iQiyi to realize a video background streaming function. Users can send video to a remote display at 1080p resolution using an iQiyi app that supports background streaming; they can then navigate out of the app, play 1080p video on the local screen, or use any other application, including sending email or browsing the web, without any disruption to background playback, as shown below:



      Figure 6: iQiyi video BGM function

      The second one is enabling WPS Office to split its UI across the local and remote displays. When connected to a TV via wireless display, the enabled WPS Office app can show PPT slides on the remote screen while showing the PPT notes on the phone’s screen, which is very convenient for the speaker. In the future, we plan to add a timer clock on the phone’s screen to remind the speaker of the time.



      Figure 7: WPS office split UI function

      These two differentiation usages are developed based on Miracast’s Presentation mode using the phone’s IA hardware capability. The two applications have been uploaded to Intel AppUp® for end users to download and install on their IA phones.

      Case study: How to enable dual display differentiation usages

      In this section, I will introduce how to realize a video background streaming function based on our experience of enabling an iQiyi app.

      As we know, the key difficulty in realizing the video BGM function is to have a service play video in the background and handle the surface view or video view correctly. When users press the home key, the surface view or video view is destroyed automatically, so we have to use a secondary display to show the background streaming video. The program flowchart is shown below:



      Figure 8: Background video streaming flow chart

      To create unique content for a secondary display, extend the Presentation class and implement the onCreate() callback. Within onCreate(), specify your UI for the secondary display by calling setContentView(). As an extension of the Dialog class, the Presentation class provides the region in which your app can display a unique UI on the secondary display.
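      The later snippets in this article construct a `MyPresentation` object without defining it. A minimal sketch of that subclass following the pattern just described (the layout resource name `R.layout.presentation_content` is an assumed placeholder, not from the article):

```java
import android.app.Presentation;
import android.content.Context;
import android.os.Bundle;
import android.view.Display;

// Sketch of the Presentation subclass used by the snippets below.
public class MyPresentation extends Presentation {

    public MyPresentation(Context outerContext, Display display) {
        super(outerContext, display);
    }

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // The Presentation supplies a context adjusted to the secondary
        // display's metrics; inflate the remote-screen UI here.
        setContentView(R.layout.presentation_content);
    }
}
```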

      There are two methods for applying the secondary display for your presentation. Use either the DisplayManager or MediaRouter APIs. The easiest way to choose a presentation display is to use the MediaRouter API. The media router service keeps track of which audio and video routes are available on the system. The media router recommends the preferred presentation display that the application should use if it wants to show content on the secondary display.

      Here's how to use the media router to create and show a presentation on the preferred presentation display using getPresentationDisplay().

       MediaRouter mediaRouter = (MediaRouter) context.getSystemService(Context.MEDIA_ROUTER_SERVICE);
 MediaRouter.RouteInfo route = mediaRouter.getSelectedRoute(MediaRouter.ROUTE_TYPE_LIVE_VIDEO);
       if (route != null) {
           Display presentationDisplay = route.getPresentationDisplay();
           if (presentationDisplay != null) {
               Presentation presentation = new MyPresentation(context, presentationDisplay);
               presentation.show();
           }
       }
      

      Another way to choose a presentation display is to use the DisplayManager API directly. The display manager service provides functions to enumerate and describe all displays that are attached to the system including displays that may be used for presentations.

      The display manager keeps track of all displays in the system. Here's how to identify suitable displays for showing presentations using getDisplays(String) and the DISPLAY_CATEGORY_PRESENTATION category.

       DisplayManager displayManager = (DisplayManager) context.getSystemService(Context.DISPLAY_SERVICE);
       Display[] presentationDisplays = displayManager.getDisplays(DisplayManager.DISPLAY_CATEGORY_PRESENTATION);
       if (presentationDisplays.length > 0) {
           Display presentationDisplay = presentationDisplays[0];
           Presentation presentation = new MyPresentation(context, presentationDisplay);
           presentation.show();
       }
      

      Developers can reference the Presentation demo code in the Android SDK, located at:

      \sdk\sources\android-17\android\app\Presentation.java

      Summary

      Besides the selling point of Intel Inside® for IA-based phones and tablets, the wireless display feature may become a highlight. ISVs should take notice and develop more innovative usages based on wireless display, especially dual display differentiation usages.

      Reference

      1. http://www.wi-fi.org
      2. http://developer.android.com/about/versions/android-4.2.html
      3. Wi-Fi_Display_Technical_Specification_v1.0.0

      Copyright © 2013 Intel Corporation. All rights reserved.

      *Other names and brands may be claimed as the property of others.

    • Developing Sensor Applications on Intel® Atom™ Processor-Based Android* Phones and Tablets


      Download Article

      Developing Sensor Applications on Intel® Atom™ Processor-Based Android* Phones and Tablets [PDF 607KB]



      This guide provides application developers with an introduction to the Android Sensor framework and discusses how to use some of the sensors that are generally available on phones and tablets based on the Intel® Atom™ processor. Among those discussed are the motion, position, and environment sensors. Even though GPS is not strictly categorized as a sensor in the Android framework, this guide discusses GPS-based location services as well. The discussion in this guide is based on Android 4.2, Jelly Bean.

      Sensors on Intel® Atom™ Processor-Based Android Phones and Tablets


      The Android phones and tablets based on Intel Atom processors can support a wide range of hardware sensors. These sensors are used to detect motion and position changes, and report the ambient environment parameters. The block diagram in Figure 1 shows a possible sensor configuration on a typical Intel Atom processor-based Android device.


      Figure 1. Sensors on an Intel® Atom™–based Android system

      Based on the data they report, we can categorize Android sensors into the classes and types shown in Table 1.

      Motion Sensors
        • Accelerometer (TYPE_ACCELEROMETER) – measures a device’s accelerations in m/s²; common use: motion detection.
        • Gyroscope (TYPE_GYROSCOPE) – measures a device’s rates of rotation; common use: rotation detection.

      Position Sensors
        • Magnetometer (TYPE_MAGNETIC_FIELD) – measures the Earth’s geomagnetic field strengths in µT; common use: compass.
        • Proximity (TYPE_PROXIMITY) – measures the proximity of an object in cm; common use: nearby object detection.
        • GPS (not a type of android.hardware.Sensor) – gets accurate geo-locations of the device; common use: accurate geo-location detection.

      Environment Sensors
        • ALS (TYPE_LIGHT) – measures the ambient light level in lx; common use: automatic screen brightness control.
        • Barometer – measures the ambient air pressure in mbar; common use: altitude detection.

      Table 1. Sensor Types Supported by the Android Platform
       

      Android Sensor Framework


      The Android sensor framework provides mechanisms to access the sensors and sensor data, with the exception of the GPS, which is accessed through the Android location services. We will discuss this later in this paper. The sensor framework is part of the android.hardware package. Table 2 lists the main classes and interfaces of the sensor framework.

      • SensorManager (class) – used to create an instance of the sensor service. Provides various methods for accessing sensors, registering and unregistering sensor event listeners, and so on.
      • Sensor (class) – used to create an instance of a specific sensor.
      • SensorEvent (class) – used by the system to publish sensor data. It includes the raw sensor data values, the sensor type, the data accuracy, and a timestamp.
      • SensorEventListener (interface) – provides callback methods to receive notifications from the SensorManager when the sensor data or the sensor accuracy has changed.

      Table 2. The Android Platform Sensor Framework

      Obtaining Sensor Configuration

      Device manufacturers decide what sensors are available on the device. You must discover which sensors are available at runtime by invoking the sensor framework’s SensorManager getSensorList() method with a parameter “Sensor.TYPE_ALL”. Code Example 1 displays a list of available sensors and the vendor, power, and accuracy information of each sensor.

      package com.intel.deviceinfo;
      	
      import java.util.ArrayList;
      import java.util.HashMap;
      import java.util.List;
      import java.util.Map;
      
      import android.app.Fragment;
      import android.content.Context;
      import android.hardware.Sensor;
      import android.hardware.SensorManager;
      import android.os.Bundle;
      import android.view.LayoutInflater;
      import android.view.View;
      import android.view.ViewGroup;
      import android.widget.AdapterView;
      import android.widget.AdapterView.OnItemClickListener;
      import android.widget.ListView;
      import android.widget.SimpleAdapter;
      	
      public class SensorInfoFragment extends Fragment {
      	
          private View mContentView;
      	
          private ListView mSensorInfoList;	
          SimpleAdapter mSensorInfoListAdapter;
      	
          private List<Sensor> mSensorList;
      
          private SensorManager mSensorManager;
      	
          @Override
          public void onActivityCreated(Bundle savedInstanceState) {
              super.onActivityCreated(savedInstanceState);
          }
      	
          @Override
          public void onPause() 
          { 
              super.onPause();
          }
      	
          @Override
          public void onResume() 
          {
              super.onResume();
          }
      	
          @Override
          public View onCreateView(LayoutInflater inflater, ViewGroup container,
                  Bundle savedInstanceState) {
              mContentView = inflater.inflate(R.layout.content_sensorinfo_main, null);
              mContentView.setDrawingCacheEnabled(false);
      	
              mSensorManager = (SensorManager)getActivity().getSystemService(Context.SENSOR_SERVICE);
      	
              mSensorInfoList = (ListView)mContentView.findViewById(R.id.listSensorInfo);
      		
              mSensorInfoList.setOnItemClickListener( new OnItemClickListener() {
      			
                  @Override
                  public void onItemClick(AdapterView<?> arg0, View view, int index, long arg3) {
      				
                      // with the index, figure out what sensor was pressed
                      Sensor sensor = mSensorList.get(index);
      				
                      // pass the sensor to the dialog.
                      SensorDialog dialog = new SensorDialog(getActivity(), sensor);
      
                      dialog.setContentView(R.layout.sensor_display);
                      dialog.setTitle("Sensor Data");
                      dialog.show();
                  }
              });
      		
              return mContentView;
          }
      	
          void updateContent(int category, int position) {
              mSensorInfoListAdapter = new SimpleAdapter(getActivity(), 
      	    getData() , android.R.layout.simple_list_item_2,
      	    new String[] {
      	        "NAME",
      	        "VALUE"
      	    },
      	    new int[] { android.R.id.text1, android.R.id.text2 });
      	mSensorInfoList.setAdapter(mSensorInfoListAdapter);
          }
      	
      	
          protected void addItem(List<Map<String, String>> data, String name, String value)   {
              Map<String, String> temp = new HashMap<String, String>();
              temp.put("NAME", name);
              temp.put("VALUE", value);
              data.add(temp);
          }
      	
      	
          private List<? extends Map<String, ?>> getData() {
              List<Map<String, String>> myData = new ArrayList<Map<String, String>>();
              mSensorList = mSensorManager.getSensorList(Sensor.TYPE_ALL);
      		
              for (Sensor sensor : mSensorList ) {
                  addItem(myData, sensor.getName(),  "Vendor: " + sensor.getVendor() + ", min. delay: " + sensor.getMinDelay() +", power while in use: " + sensor.getPower() + "mA, maximum range: " + sensor.getMaximumRange() + ", resolution: " + sensor.getResolution());
              }
              return myData;
          }
      }

      Code Example 1. A Fragment that Displays the List of Sensors

      Sensor Coordinate System

      The sensor framework reports sensor data using a standard 3-axis coordinate system, where X, Y, and Z are represented by values[0], values[1], and values[2] in the SensorEvent object, respectively.

      Some sensors, such as light, temperature, proximity, and pressure, return only single values. For these sensors, only values[0] in the SensorEvent object is used.

      Other sensors report data in the standard 3-axis sensor coordinate system. The following is a list of such sensors:

      • Accelerometer
      • Gravity sensor
      • Gyroscope
      • Geomagnetic field sensor

      The 3-axis sensor coordinate system is defined relative to the screen of the device in its natural (default) orientation. For a phone, the default orientation is portrait; for a tablet, the natural orientation is landscape. When a device is held in its natural orientation, the x axis is horizontal and points to the right, the y axis is vertical and points up, and the z axis points outside of the screen (front) face. Figure 2 shows the sensor coordinate system for a phone, and Figure 3 for a tablet.


      Figure 2. The sensor coordinate system for a phone


      Figure 3. The sensor coordinate system for a tablet

      The most important point regarding the sensor coordinate system is that the sensor’s coordinate system never changes when the device moves or changes its orientation.
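      As an illustration of working in this fixed coordinate system, the following plain-Java helper (my own sketch, not from the article; the sign conventions are one common choice) derives approximate pitch and roll angles from a single accelerometer sample laid out as described above:

```java
// Illustrative helper: derives tilt angles from one accelerometer sample
// given in the Android sensor coordinate system (values[0]=x, values[1]=y,
// values[2]=z, as delivered in SensorEvent.values). Valid only while the
// device is roughly at rest, so gravity dominates the reading.
public class TiltFromAccel {

    // Rotation about the x axis, in degrees.
    public static double pitchDegrees(float[] v) {
        return Math.toDegrees(Math.atan2(-v[1],
                Math.sqrt((double) v[0] * v[0] + (double) v[2] * v[2])));
    }

    // Rotation about the y axis, in degrees.
    public static double rollDegrees(float[] v) {
        return Math.toDegrees(Math.atan2(v[0], v[2]));
    }

    public static void main(String[] args) {
        // Device lying flat, screen up: gravity is entirely on +z, so both
        // angles come out as zero.
        float[] flat = {0f, 0f, 9.81f};
        System.out.println(pitchDegrees(flat) + " " + rollDegrees(flat));
    }
}
```

      Because the sensor axes never follow the screen rotation, a helper like this returns the same angles whether the activity is in portrait or landscape; any remapping to screen coordinates must be done separately.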

      Monitoring Sensor Events

      The sensor framework reports sensor data with the SensorEvent objects. A class can monitor a specific sensor’s data by implementing the SensorEventListener interface and registering with the SensorManager for the specific sensor. The sensor framework informs the class about the changes in the sensor states through the following two SensorEventListener callback methods implemented by the class:

      onAccuracyChanged()

      and

      onSensorChanged()

      Code Example 2 implements the SensorDialog used in the SensorInfoFragment example we discussed in the section “Obtaining Sensor Configuration.”

      package com.intel.deviceinfo;
      
      import android.app.Dialog;
      import android.content.Context;
      import android.hardware.Sensor;
      import android.hardware.SensorEvent;
      import android.hardware.SensorEventListener;
      import android.hardware.SensorManager;
      import android.os.Bundle;
      import android.widget.TextView;
      
      public class SensorDialog extends Dialog implements SensorEventListener {
          Sensor mSensor;
          TextView mDataTxt;
          private SensorManager mSensorManager;
      
    public SensorDialog(Context ctx, Sensor sensor) {
        super(ctx);
        mSensor = sensor;
        // Obtain the sensor service here so onStart() can register the listener.
        mSensorManager = (SensorManager) ctx.getSystemService(Context.SENSOR_SERVICE);
    }
      	
          @Override
          protected void onCreate(Bundle savedInstanceState) {
              super.onCreate(savedInstanceState);
              mDataTxt = (TextView) findViewById(R.id.sensorDataTxt);
              mDataTxt.setText("...");
              setTitle(mSensor.getName());
          }
      	
          @Override
          protected void onStart() {
              super.onStart();
              mSensorManager.registerListener(this, mSensor,  SensorManager.SENSOR_DELAY_FASTEST);
          }
      		
          @Override
          protected void onStop() {
              super.onStop();
              mSensorManager.unregisterListener(this, mSensor);
          }
      
          @Override
          public void onAccuracyChanged(Sensor sensor, int accuracy) {
          }
      
          @Override
          public void onSensorChanged(SensorEvent event) {
              if (event.sensor.getType() != mSensor.getType()) {
                  return;
              }
              StringBuilder dataStrBuilder = new StringBuilder();
              if ((event.sensor.getType() == Sensor.TYPE_LIGHT)||
                  (event.sensor.getType() == Sensor.TYPE_TEMPERATURE)||
                  (event.sensor.getType() == Sensor.TYPE_PRESSURE)) {
                  dataStrBuilder.append(String.format("Data: %.3f\n", event.values[0]));
              }
              else {
                  dataStrBuilder.append(
                      String.format("Data: %.3f, %.3f, %.3f\n",
                      event.values[0], event.values[1], event.values[2] ));
              }
              mDataTxt.setText(dataStrBuilder.toString());
          }
      }

      Code Example 2. A Dialog that Shows the Sensor Values**

      Motion Sensors

      Motion sensors are used to monitor device movement, such as shake, rotate, swing, or tilt. The accelerometer and gyroscope are two motion sensors available on many tablet and phone devices.

      Motion sensors report data using the sensor coordinate system, where the three values in the SensorEvent object, values[0], values[1], and values[2], represent the x-, y-, and z-axis values, respectively.

      To understand the motion sensors and apply the data in an application, we need to apply some physics formulas related to force, mass, acceleration, Newton’s laws of motion, and the relationship between several of these entities in time. To learn more about these formulas and relationships, refer to your favorite physics textbooks or public domain sources.

      Accelerometer

      The accelerometer measures the acceleration applied on the device, and its properties are summarized in Table 3.

      Sensor          Type                 SensorEvent Data (m/s²)   Description
      Accelerometer   TYPE_ACCELEROMETER   values[0]                 Acceleration along the x axis
                                           values[1]                 Acceleration along the y axis
                                           values[2]                 Acceleration along the z axis

      Table 3. The Accelerometer

      The concept for the accelerometer is derived from Newton’s second law of motion:

      a = F/m

      The acceleration of an object is the result of the net external force F applied to the object: it is proportional to F and inversely proportional to the object’s mass m. The external forces include one that applies to all objects on Earth, gravity.

      In our code, instead of directly using the above equation, we are more concerned about the result of the acceleration during a period of time on the device’s speed and position. The following equation describes the relationship of an object’s velocity v1, its original velocity v0, the acceleration a, and the time t:

      v1 = v0 + at

      To calculate the object’s position displacement s, we use the following equation:

      s = v0t + (1/2)at²

      In many cases we start with the condition v0 equal to 0 (before the device starts moving), which simplifies the equation to:

      s = at²/2

      Because of gravity, the gravitational acceleration, represented by the symbol g, is applied to all objects on Earth. Regardless of the object’s mass, g depends only on the latitude of the object’s location, with a value in the range of 9.78 to 9.82 m/s². We adopt a conventional standard value for g:

      g = 9.80665 m/s²

      Because the accelerometer returns the values using a multidimensional device coordinate system, in our code we can calculate the distances along the x, y, and z axes using the following equations:

      Sx = AxT²/2
      Sy = AyT²/2
      Sz = AzT²/2

      Where Sx, Sy, and Sz are the displacements on the x axis, y axis, and z axis, respectively, and Ax, Ay, and Az are the accelerations on the x axis, y axis, and z axis, respectively. T is the time of the measurement period.
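As a rough illustration of these equations, the displacement along one axis can be computed from a sampled acceleration and a measurement period. The class and method names below are our own, not part of the Android API, and in practice double-integrating raw accelerometer data accumulates error quickly, so treat this as a sketch:

```java
public class Displacement {
    // Displacement s = a * t^2 / 2 for one axis, assuming the device starts at rest (v0 = 0).
    public static double displacement(double accel, double seconds) {
        return accel * seconds * seconds / 2.0;
    }

    public static void main(String[] args) {
        // An object free-falling from rest for 2 s travels about 19.6 m.
        System.out.println(displacement(9.80665, 2.0)); // prints 19.6133
    }
}
```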

      public class SensorDialog extends Dialog implements SensorEventListener {
          …	
          private Sensor mSensor;
          private SensorManager mSensorManager;
      	
          public SensorDialog(Context context) {
              super(context);
              mSensorManager = (SensorManager)context.getSystemService(Context.SENSOR_SERVICE);
              mSensor = mSensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
          …
      }

      Code Example 3. Instantiation of an Accelerometer**

      Sometimes we don’t use all three data values. Other times we may also need to take the device’s orientation into consideration. For example, in a maze application, we only use the x-axis and y-axis gravitational acceleration to calculate the ball’s moving directions and distances, based on the orientation of the device. The following code fragment (Code Example 4) outlines the logic.

      @Override
      public void onSensorChanged(SensorEvent event) {
          if (event.sensor.getType() != Sensor.TYPE_ACCELEROMETER) {
              return;
          }
          float accelX, accelY;
          …
          // detect the current rotation currentRotation from the “natural orientation”
          // using the WindowManager
          switch (currentRotation) {
              case Surface.ROTATION_0:
                  accelX = event.values[0];
                  accelY = event.values[1];
                  break;
              case Surface.ROTATION_90:
                  // the device is rotated: the screen x axis maps to the device y axis
                  accelX = -event.values[1];
                  accelY = event.values[0];
                  break;
              case Surface.ROTATION_180:
                  accelX = -event.values[0];
                  accelY = -event.values[1];
                  break;
              case Surface.ROTATION_270:
                  accelX = event.values[1];
                  accelY = -event.values[0];
                  break;
          }
          // calculate the ball’s moving distances along x and y using accelX, accelY, and the time delta
          …
      }

      Code Example 4. Considering the Device Orientation When Using the Accelerometer Data in a Maze Game**

      Gyroscope


      The gyroscope (or simply gyro) measures the device’s rate of rotation around the x, y, and z axes, as shown in Table 4. The gyroscope data values can be positive or negative. Looking at the origin from a position along the positive half of the axis, if the rotation around the axis is counterclockwise, the value is positive; if the rotation is clockwise, the value is negative. We can also determine the direction of a gyroscope value using the “right-hand rule,” illustrated in Figure 4.


      Figure 4. Using the “right-hand rule” to decide the positive rotation direction

      Sensor      Type             SensorEvent Data (rad/s)   Description
      Gyroscope   TYPE_GYROSCOPE   values[0]                  Rotation rate around the x axis
                                   values[1]                  Rotation rate around the y axis
                                   values[2]                  Rotation rate around the z axis

      Table 4. The Gyroscope
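Because the gyroscope reports angular velocity, the rotation angle around an axis can be approximated by integrating the reported rate over the time between SensorEvents (event.timestamp is in nanoseconds). A minimal plain-Java sketch of that integration step; the class and method names are our own:

```java
public class GyroIntegrator {
    private static final double NS_TO_S = 1.0e-9;
    private long lastTimestampNs = 0;
    private double angleRad = 0.0;   // accumulated rotation around one axis

    // rateRadPerSec is one gyroscope value (e.g., event.values[2] for the z axis);
    // timestampNs is event.timestamp.
    public double onSample(double rateRadPerSec, long timestampNs) {
        if (lastTimestampNs != 0) {
            double dt = (timestampNs - lastTimestampNs) * NS_TO_S;
            angleRad += rateRadPerSec * dt;   // simple rectangular integration
        }
        lastTimestampNs = timestampNs;
        return angleRad;
    }
}
```

Like the accelerometer displacement calculation, this integration drifts over time; real applications usually combine the gyroscope with other sensors to correct the drift.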

      Code Example 5 shows how to instantiate a gyroscope.

      public class SensorDialog extends Dialog implements SensorEventListener {
          …	
          private Sensor mGyro;
          private SensorManager mSensorManager;
      	
          public SensorDialog(Context context) {
              super(context);
              mSensorManager = (SensorManager)context.getSystemService(Context.SENSOR_SERVICE);
              mGyro = mSensorManager.getDefaultSensor(Sensor.TYPE_GYROSCOPE);
          …
      }

      Code Example 5. Instantiation of a Gyroscope**

      Position Sensors

      Many Android tablets support two position sensors: the magnetometer and the proximity sensor. The magnetometer measures the strength of the Earth’s magnetic field along the x, y, and z axes, while the proximity sensor detects the distance of the device from another object.

      Magnetometer

      The most important usage of the magnetometer (described in Table 5) in Android systems is to implement the compass.

      Sensor         Type                  SensorEvent Data (µT)   Description
      Magnetometer   TYPE_MAGNETIC_FIELD   values[0]               Earth’s magnetic field strength along the x axis
                                           values[1]               Earth’s magnetic field strength along the y axis
                                           values[2]               Earth’s magnetic field strength along the z axis

      Table 5. The Magnetometer
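To implement a compass, the magnetometer readings are typically combined with accelerometer readings: SensorManager.getRotationMatrix() computes a rotation matrix from the two vectors, and SensorManager.getOrientation() extracts the azimuth (the angle from magnetic north). A hedged sketch of that combination inside a listener registered for both sensor types; the field names are our own:

```java
// Inside a SensorEventListener registered for both TYPE_ACCELEROMETER and TYPE_MAGNETIC_FIELD.
private final float[] mGravity = new float[3];
private final float[] mGeomagnetic = new float[3];

@Override
public void onSensorChanged(SensorEvent event) {
    if (event.sensor.getType() == Sensor.TYPE_ACCELEROMETER) {
        System.arraycopy(event.values, 0, mGravity, 0, 3);
    } else if (event.sensor.getType() == Sensor.TYPE_MAGNETIC_FIELD) {
        System.arraycopy(event.values, 0, mGeomagnetic, 0, 3);
    }
    float[] rotation = new float[9];
    float[] orientation = new float[3];
    // getRotationMatrix() returns false when the readings are unusable (e.g., free fall)
    if (SensorManager.getRotationMatrix(rotation, null, mGravity, mGeomagnetic)) {
        SensorManager.getOrientation(rotation, orientation);
        float azimuthRad = orientation[0];   // rotation around the z axis; 0 = magnetic north
        // update the compass UI with azimuthRad
    }
}
```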

      Code Example 6 shows how to instantiate a magnetometer.

      public class SensorDialog extends Dialog implements SensorEventListener {
          …	
          private Sensor mMagnetometer;
          private SensorManager mSensorManager;
      	
          public SensorDialog(Context context) {
              super(context);
              mSensorManager = (SensorManager)context.getSystemService(Context.SENSOR_SERVICE);
              mMagnetometer = mSensorManager.getDefaultSensor(Sensor.TYPE_MAGNETIC_FIELD);
          …
      }

      Code Example 6. Instantiation of a Magnetometer**

      Proximity

      The proximity sensor provides the distance between the device and another object. The system can use it to detect whether the device is being held close to the user (see Table 6), for example to determine that the user is on a phone call and to turn off the display during the call.

      Table 6: The Proximity Sensor
      Sensor      Type             SensorEvent Data   Description
      Proximity   TYPE_PROXIMITY   values[0]          Distance from an object in cm. Some proximity sensors only report a Boolean value indicating whether the object is close enough.

      Code Example 7 shows how to instantiate a proximity sensor.

      public class SensorDialog extends Dialog implements SensorEventListener {
          …	
          private Sensor mProximity;
          private SensorManager mSensorManager;
      	
          public SensorDialog(Context context) {
              super(context);
              mSensorManager = (SensorManager)context.getSystemService(Context.SENSOR_SERVICE);
              mProximity = mSensorManager.getDefaultSensor(Sensor.TYPE_PROXIMITY);
          …
      }

      Code Example 7. Instantiation of a Proximity Sensor**

      Environment Sensors

      The environment sensors detect and report the device’s ambient environment parameters, such as light, temperature, pressure, or humidity. The ambient light sensor (ALS) and the pressure sensor (barometer) are available on many Android tablets.

      Ambient Light Sensor (ALS)

      The ambient light sensor, described in Table 7, is used by the system to detect the illumination of the surrounding environment and automatically adjust the screen brightness accordingly.

      Table 7: The Ambient Light Sensor
      Sensor   Type         SensorEvent Data (lx)   Description
      ALS      TYPE_LIGHT   values[0]               The illumination around the device

      Code Example 8 shows how to instantiate the ALS.

      …	
          private Sensor mALS;
          private SensorManager mSensorManager;
      
          …	
              mSensorManager = (SensorManager)context.getSystemService(Context.SENSOR_SERVICE);
              mALS = mSensorManager.getDefaultSensor(Sensor.TYPE_LIGHT);
          …

      Code Example 8. Instantiation of an Ambient Light Sensor**

      Barometer

      Applications can use the atmospheric pressure sensor (barometer), described in Table 8, to calculate the altitude of the device’s current location.

      Table 8: The Atmosphere Pressure Sensor
      Sensor      Type            SensorEvent Data (mbar)   Description
      Barometer   TYPE_PRESSURE   values[0]                 The ambient air pressure
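The altitude can be derived from the pressure reading with the international barometric formula; the Android framework exposes this calculation as SensorManager.getAltitude(SensorManager.PRESSURE_STANDARD_ATMOSPHERE, pressure). For illustration, here is the same formula as a plain-Java helper of our own:

```java
public class Altitude {
    // Standard atmosphere at sea level, in mbar (hPa).
    public static final double P0 = 1013.25;

    // International barometric formula: altitude in meters from ambient pressure in mbar.
    public static double fromPressure(double pressureMbar) {
        return 44330.0 * (1.0 - Math.pow(pressureMbar / P0, 1.0 / 5.255));
    }

    public static void main(String[] args) {
        System.out.println(fromPressure(1013.25)); // sea level: prints 0.0
        System.out.println(fromPressure(900.0));   // roughly 1 km above sea level
    }
}
```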

      Code Example 9 shows how to instantiate the barometer.

      …	
          private Sensor mBarometer;
          private SensorManager mSensorManager;
      
          …	
              mSensorManager = (SensorManager)context.getSystemService(Context.SENSOR_SERVICE);
              mBarometer = mSensorManager.getDefaultSensor(Sensor.TYPE_PRESSURE);
          …

      Code Example 9. Instantiation of a Barometer**

      Sensor Performance and Optimization Guidelines

      To use sensors in your applications, you should follow these best practices:

      • Always check the specific sensor’s availability before using it
        The Android platform does not require the inclusion or exclusion of a specific sensor on the device. Before using a sensor in your application, always first check to see if it is actually available.
      • Always unregister the sensor listeners
        When the activity that implements the sensor listener becomes invisible, or the dialog stops, unregister the sensor listener. This can be done in the activity’s onPause() method or in the dialog’s onStop() method. Otherwise, the sensor continues acquiring data and, as a result, drains the battery.
      • Don’t block the onSensorChanged() method
        The onSensorChanged() method is frequently called by the system to report the sensor data. You should put as little logic inside this method as possible. Complicated calculations with the sensor data should be moved outside of this method.
      • Always test your sensor applications on real devices
        All sensors described in this section are hardware sensors. The Android Emulator may not be capable of simulating a particular sensor’s functions and performance.
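The first two practices above can be sketched together; this is a hedged outline of an Activity that implements SensorEventListener (the surrounding class and mSensorManager field are assumed):

```java
// In an Activity that implements SensorEventListener:
@Override
protected void onResume() {
    super.onResume();
    // Practice 1: check availability first -- getDefaultSensor() returns null
    // if the device has no sensor of the requested type.
    Sensor accel = mSensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
    if (accel != null) {
        mSensorManager.registerListener(this, accel, SensorManager.SENSOR_DELAY_NORMAL);
    }
}

@Override
protected void onPause() {
    super.onPause();
    // Practice 2: always unregister when the activity is no longer visible,
    // otherwise the sensor keeps running and drains the battery.
    mSensorManager.unregisterListener(this);
}
```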

      GPS and Location


      GPS (Global Positioning System) is a satellite-based system that provides accurate geo-location information around the world. GPS is available on many Android phones and tablets. In many respects, GPS behaves like a position sensor: it can provide accurate location data for applications running on the device. On the Android platform, however, GPS is not directly managed by the sensor framework. Instead, the Android location service accesses and transfers GPS data to an application through the location listener callbacks.

      This section only discusses GPS and location services from a hardware sensor point of view. The complete location strategies offered by Android 4.2 and Intel Atom processor-based Android phones and tablets are a much larger topic and are outside the scope of this section.

      Android Location Services

      Using GPS is not the only way to obtain location information on an Android device. The system can also use Wi-Fi*, cellular networks, or other wireless networks to get the device’s current location. GPS and wireless networks (including Wi-Fi and cellular networks) act as “location providers” for Android location services. Table 9 lists the main classes and interfaces used to access Android location services.

      Table 9: The Android Platform Location Service
      Name               Type             Description
      LocationManager    Class            Used to access location services; provides methods for requesting periodic location updates for an application, or sending proximity alerts
      LocationProvider   Abstract class   The abstract superclass for location providers
      Location           Class            Used by the location providers to encapsulate geographical data
      LocationListener   Interface        Used to receive location notifications from the LocationManager

      Obtaining GPS Location Updates

      Similar to the mechanism of using the sensor framework to access sensor data, the application implements several callback methods defined in the LocationListener interface to receive GPS location updates. The LocationManager sends GPS update notifications to the application through these callbacks (the “Don’t call us, we’ll call you” rule).

      To access GPS location data in the application, you need to request the fine location access permission in your Android manifest file (Code Example 10).

      <manifest …>
      …
          <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
      …  
      </manifest>

      Code Example 10.Requesting the Fine Location Access Permission in the Manifest File**

      Code Example 11 shows how to get GPS updates and display the latitude and longitude coordinates on a dialog text view.

      package com.intel.deviceinfo;
      
      import android.app.Dialog;
      import android.content.Context;
      import android.location.Location;
      import android.location.LocationListener;
      import android.location.LocationManager;
      import android.os.Bundle;
      import android.widget.TextView;
      
      public class GpsDialog extends Dialog implements LocationListener {
          TextView mDataTxt;
          private LocationManager mLocationManager;
      	
          public GpsDialog(Context context) {
              super(context);
              mLocationManager = (LocationManager)context.getSystemService(Context.LOCATION_SERVICE);
          }
      
          @Override
          protected void onCreate(Bundle savedInstanceState) {
              super.onCreate(savedInstanceState);
              mDataTxt = (TextView) findViewById(R.id.sensorDataTxt);
              mDataTxt.setText("...");
              setTitle("Gps Data");
          }
      	
          @Override
          protected void onStart() {
              super.onStart();
              mLocationManager.requestLocationUpdates(
                  LocationManager.GPS_PROVIDER, 0, 0, this);
          }
      		
          @Override
          protected void onStop() {
              super.onStop();
              mLocationManager.removeUpdates(this);
          }
      
          @Override
          public void onStatusChanged(String provider, int status, 
              Bundle extras) {
          }
      
          @Override
          public void onProviderEnabled(String provider) {
          }
      
          @Override
          public void onProviderDisabled(String provider) {
          }
      
          @Override
          public void onLocationChanged(Location location) {
              StringBuilder dataStrBuilder = new StringBuilder();
              dataStrBuilder.append(String.format("Latitude: %.3f, Longitude: %.3f\n", location.getLatitude(), location.getLongitude()));
              mDataTxt.setText(dataStrBuilder.toString());
          }
      }

      Code Example 11. A Dialog that Displays the GPS Location Data**

      GPS and Location Performance and Optimization Guidelines

      GPS provides the most accurate location information on the device. On the other hand, as a hardware feature, it consumes extra energy. It also takes time for the GPS to get the first location fix. Here are some guidelines you should follow when developing GPS and location-aware applications:

      • Consider all available location providers
        In addition to the GPS_PROVIDER, there is also the NETWORK_PROVIDER. If your application only needs coarse location data, consider using the NETWORK_PROVIDER.
      • Use the cached locations
        It takes time for the GPS to get the first location fix. While your application is waiting for an accurate GPS location update, you can first use the location returned by the LocationManager’s getLastKnownLocation() method to perform part of the work.
      • Minimize the frequency and duration of location update requests
        You should request the location update only when needed and promptly de-register from the location manager once you no longer need location updates.
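The cached-location guideline above can be sketched as follows: before the first fix arrives, seed the UI with the last cached GPS location, if any. This is a hedged fragment with mLocationManager as in Code Example 11; showLocation() is a hypothetical UI helper, and the five-minute staleness threshold is an arbitrary example value:

```java
// While waiting for onLocationChanged(), use the cached GPS location if one exists.
Location lastKnown = mLocationManager.getLastKnownLocation(LocationManager.GPS_PROVIDER);
if (lastKnown != null) {
    // The cached fix may be stale; check its age before trusting it.
    long ageMs = System.currentTimeMillis() - lastKnown.getTime();
    if (ageMs < 5 * 60 * 1000) {   // e.g., accept fixes newer than five minutes
        showLocation(lastKnown.getLatitude(), lastKnown.getLongitude()); // hypothetical UI helper
    }
}
```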

      Summary


      The Android platform provides APIs for developers to access a device’s built-in sensors. These sensors are capable of providing raw data about the device’s current motion, position, and ambient environment conditions with high precision and accuracy. In developing sensor applications, you should follow the best practices to improve the performance and power efficiency.

      About the Author

          Miao Wei is a software engineer in the Intel Software and Services Group. He is currently working on the Intel® Atom™ processor scale-enabling projects.




      Copyright © 2013 Intel Corporation. All rights reserved.
      *Other names and brands may be claimed as the property of others.

      **This sample source code is released under the Intel Sample Source Code License Agreement

      Developing Android* Applications with Voice Recognition Features

      by Stanislav Pavlov


      Android itself can’t recognize speech, so a typical Android device cannot recognize speech out of the box. Or is there a way it can?

      The easiest way is to ask another application to do the recognition for us. Asking another application to do something in Android is done with intents.

      Our target device must have at least one application that can process the Intent for speech recognition, which is called by the RecognizerIntent.ACTION_RECOGNIZE_SPEECH action.

      One such app is Google Voice Search. It is one of the best recognizers available for Android and supports many languages. This service requires an Internet connection, because the voice recognition happens on Google servers. The app has a very simple Activity that informs users they can speak. The moment the user stops talking, the dialog closes and our application (the intent caller) receives an array of strings with the recognized speech.

      A voice recognition sample

      Let’s write a little sample app that demonstrates using voice search in applications.

      Our application needs to do these things:

      • Receive a request for voice recognition
      • Check whether an application for speech recognition is available
      • If speech recognition is available, call the intent for it and receive the results
      • If speech recognition is not available, show a dialog offering to install Google Voice Search and redirect the user to Google Play if he agrees

      First, we create a class that implements the logic for speech recognition. We call this class SpeechRecognitionHelper and declare in it a static, public function run() that receives a request for launching a recognition:

      /**
       * A helper class for speech recognition
       */
      public class SpeechRecognitionHelper {
      
      /**
           * Runs the recognition process. Checks the availability of a recognition Activity.
           * If the Activity is absent, sends the user to Google Play to install Google Voice Search.
           * If the Activity is available, sends the Intent to run it.
           *
           * @param callingActivity the Activity that initiated the recognition process
           */
          public static void run(Activity callingActivity) {
              // check if there is a recognition Activity
              if (isSpeechRecognitionActivityPresented(callingActivity)) {
                  // if yes – running recognition
                  startRecognition(callingActivity);
              } else {
                  // if no, then showing notification to install Voice Search
                  Toast.makeText(callingActivity, "In order to activate speech recognition you must install \"Google Voice Search\"", Toast.LENGTH_LONG).show();
                  // start installing process
                  installGoogleVoiceSearch(callingActivity);
              }
          }
      }
      

      As you can see, besides the run() function we need to implement three other functions:

      • isSpeechRecognitionActivityPresented – checks if the speech recognition application is present on the system
      • installGoogleVoiceSearch – initializes the Google Voice Search installation process
      • startRecognition – prepares the appropriate Intent and runs the recognition

      To check whether the device has an application for speech recognition, we can use the queryIntentActivities method of the PackageManager class. This method returns a list of activities that can process the specified Intent. To get an instance of the PackageManager, we call getPackageManager().

      Our code is shown below:

      isSpeechRecognitionActivityPresented

      /**
           * Checks the availability of a speech recognizing Activity
           *
           * @param callerActivity – the Activity that requested the check
           * @return true if the Activity is available, false if it is absent
           */
          private static boolean isSpeechRecognitionActivityPresented(Activity callerActivity) {
              try {
                  // getting an instance of package manager
                  PackageManager pm = callerActivity.getPackageManager();
                  // a list of activities, which can process speech recognition Intent
                  List<ResolveInfo> activities = pm.queryIntentActivities(new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH), 0);
      
                  if (activities.size() != 0) {    // if list not empty
                      return true;                // then we can recognize the speech
                  }
              } catch (Exception e) {
                  // ignore: any failure means we cannot recognize speech
              }
      
              return false; // we have no activities to recognize the speech
          }
      

      Now implement the startRecognition function. This function forms the appropriate Intent for launching the speech recognition Activity and starts it. You can find detailed information on how to do this on the RecognizerIntent documentation page.

      Source code:

         /**
           * Sends an Intent requesting speech recognition
           * @param callerActivity – the Activity that initiated the request
           */
          private static void startRecognition(Activity callerActivity) {
      
              // creating an Intent with the RecognizerIntent.ACTION_RECOGNIZE_SPEECH action
              Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
      
              // giving additional parameters:
              intent.putExtra(RecognizerIntent.EXTRA_PROMPT, "Select an application");    // user hint
              intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL, RecognizerIntent.LANGUAGE_MODEL_WEB_SEARCH);    // recognition model optimized for short phrases such as search queries
              intent.putExtra(RecognizerIntent.EXTRA_MAX_RESULTS, 1);    // number of results we want to receive; only the first (most relevant) one
      
              // start the Activity and wait for the result
              callerActivity.startActivityForResult(intent, SystemData.VOICE_RECOGNITION_REQUEST_CODE);
          }
      

      And last, we’ll implement the installGoogleVoiceSearch. This function will show the dialog, asking the user if he wants to install Google Voice Search and send him to Google Play, if he does.

      /**
           * Asks permission to install Google Voice Search.
           * If permission is granted, sends the user to Google Play
           * @param ownerActivity – the Activity that initiated the installation
           */
          private static void installGoogleVoiceSearch(final Activity ownerActivity) {
      
              // creating a dialog asking the user if he wants
              // to install Voice Search
              Dialog dialog = new AlertDialog.Builder(ownerActivity)
                  .setMessage("For recognition it’s necessary to install \"Google Voice Search\"")    // dialog message
                  .setTitle("Install Voice Search from Google Play?")    // dialog header
                  .setPositiveButton("Install", new DialogInterface.OnClickListener() {    // confirm button
      
                      // Install Button click handler
                      @Override
                      public void onClick(DialogInterface dialog, int which) {
                          try {
                              // creating an Intent for opening applications page in Google Play
                              // Voice Search package name: com.google.android.voicesearch
                              Intent intent = new Intent(Intent.ACTION_VIEW, Uri.parse("market://details?id=com.google.android.voicesearch"));
                              // setting flags to avoid going in application history (Activity call stack)
                              intent.setFlags(Intent.FLAG_ACTIVITY_NO_HISTORY | Intent.FLAG_ACTIVITY_CLEAR_WHEN_TASK_RESET);
                              // sending an Intent
                              ownerActivity.startActivity(intent);
                           } catch (Exception ex) {
                               // if something goes wrong, do nothing
                           }
                      }})
      
                  .setNegativeButton("Cancel", null)    // cancel button
                  .create();
      
              dialog.show();    // showing dialog
          }
      

      That’s about it. We run the speech recognition Activity, request the user’s permission to install Voice Search, and send him to Google Play if he consents. One thing we still need to do is gather the voice recognition results.

      We sent the request using the startActivityForResult function, so to gather the results of the launched Activity we need to override the onActivityResult method in our intent-caller Activity. This can be done this way:

      // Activity Results handler
          @Override
          public void onActivityResult(int requestCode, int resultCode, Intent data) {
      
              // if it’s speech recognition results
              // and process finished ok
              if (requestCode == SystemData.VOICE_RECOGNITION_REQUEST_CODE && resultCode == RESULT_OK) {
      
                  // receiving the results as a string array;
                  // there can be several strings, because recognition is sometimes inaccurate;
                  // the most relevant results are at the beginning of the list
                  ArrayList<String> matches = data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
      
                  // the matches array holds the results; let’s show the most relevant one
                  if (matches.size() > 0) Toast.makeText(this, matches.get(0), Toast.LENGTH_LONG).show();
              }
      
              super.onActivityResult(requestCode, resultCode, data);
          }
      

      Now we’re ready

      The SpeechRecognitionHelper class we created allows us to perform a speech recognition request by calling a single function, run().

      All that is needed to add a recognition feature is to add this class to your project and call the run() function where needed, then process the text results by overriding the onActivityResult method of the Activity that initiated the recognition call.

      For additional information, look at the Android Developers website. There you’ll find good examples showing how to do voice recognition and, importantly, how to get the list of available languages. You will need this list if you want to recognize a language other than the user’s default locale.
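One way to obtain that language list is to send the RecognizerIntent.ACTION_GET_LANGUAGE_DETAILS ordered broadcast and read the extras from the result. A hedged sketch (context is an available Context, and the receiver runs asynchronously):

```java
// Query the recognizer for its supported languages via an ordered broadcast.
Intent detailsIntent = new Intent(RecognizerIntent.ACTION_GET_LANGUAGE_DETAILS);
context.sendOrderedBroadcast(detailsIntent, null, new BroadcastReceiver() {
    @Override
    public void onReceive(Context context, Intent intent) {
        Bundle results = getResultExtras(true);
        if (results.containsKey(RecognizerIntent.EXTRA_SUPPORTED_LANGUAGES)) {
            List<String> languages =
                results.getStringArrayList(RecognizerIntent.EXTRA_SUPPORTED_LANGUAGES);
            // languages now holds IETF tags such as "en-US"
        }
    }
}, null, Activity.RESULT_OK, null, null);
```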

      For fast integration of voice input into your app, you can download and use this code for the SpeechRecognitionHelper class.

      About the Authors

      Stanislav works in the Software & Service Group at Intel Corporation. He has 10+ years of experience in software development. His main interest is optimization of performance, power consumption, and parallel programming. In his current role as an Application Engineer providing technical support for Intel-based devices, Stanislav works closely with software developers and SoC architects to help them achieve the best possible performance on Intel platforms. Stanislav holds a Master's degree in Mathematical Economics from the National Research University Higher School of Economics.

      Mikhail is co-author of this blog and an Intel summer intern who is studying computer science at Lobachevsky University. He likes to dive deep into math and Android programming tricks.

      Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.
      Copyright © 2013 Intel Corporation. All rights reserved.
      *Other names and brands may be claimed as the property of others.

    • Development and Optimization for NDK-based Android Game Application on Platforms based on Intel® Architecture


      The Android Native Development Kit (NDK) is a companion tool to the Android SDK and allows you to implement parts of your app using native-code languages such as C and C++.

      You can download the NDK toolkit: http://developer.android.com/tools/sdk/ndk/index.html

      NDK for X86 Instruction Set Architecture

      Android is an open-source operating system developed by Google. Currently, Android can be run on three families of instruction set architectures:  ARM, x86, and MIPS. X86 denotes a family of instruction set architectures based on the Intel 8086 CPU, introduced in 1978. Let’s describe the differences between X86 (also called Intel® architecture or IA) and the other chipsets that Android runs on from an application perspective.

      Android applications can be classified into two types:

      • Dalvik applications, which include Java* code that uses only the official Android SDK API, plus the necessary resource files, such as .xml and .png, compiled into an APK file.
      • Android NDK applications, which include Java code and resource files as well as C/C++ source code and sometimes assembly code. All native code is compiled into a dynamic linked library (.so file) and then called by Java in the main program through the JNI mechanism.

      Android Game Engine

      The game engine is a key module for game applications. Several engines run on Android, including open source and commercial 2D and 3D engines. Because these engines differ so much, it can be difficult to migrate and develop Android-based games to run on the IA platform. Cocos2d-x and Unity 3D are the most popular game engines for Android platforms.

      Cocos2d-x is based on Cocos2d-iPhone and expands the set of supported platforms, with multiple programming languages that share the same API structure. Since its introduction in July 2010, cocos2d-x has been downloaded over 500 million times. Giants in the mobile game industry such as Zynga, Glu, GREE, DeNA, Konami, TinyCo, Gamevil, HandyGames, Renren Games, 4399, HappyElements, SDO, and Kingsoft are using cocos2d-x.

      Unity 3D is a cross-platform game engine with a built-in IDE developed by Unity Technologies. It is used to develop video games for web plugins, desktop platforms, consoles, and mobile devices, and is utilized by over one million developers. It grew from an OS X supported game development tool in 2005 to a multi-platform game engine. The latest update, Unity 4.1, was released March 2013. It currently supports development for iOS, Android, Windows, Blackberry 10, OS X, Linux, web browsers, Flash*, PlayStation 3, Xbox 360, Windows Phone, and Wii.

      Developing Android NDK-based games on IA platforms

      Before we talk about game development, we should talk about the Android platform in general. As you know, games come in many different styles. Different styles of games need different design principles. At the start of your project, you usually decide the genre of your game. Unless you come up with something completely new and previously unseen, chances are high that your game idea fits into one of the broad genres currently popular. Most genres have established game mechanic standards (e.g., control schemes, specific goals, etc.). Deviating from these standards can make a game a great hit, as gamers always long for something new. Some of the common genres are:

      • Arcade & Action
      • Brain & Puzzle
      • Cards & Casino
      • Casual
      • Live Wallpaper
      • Racing
      • Sports Games
      • Widgets
      • etc

      The process for developing general Android games is similar to any other Android application. First, download the Android SDK and NDK from Google’s web site and install them properly.

      I assume you have done all the installation and preparation work. Using Cocos2d-x game engine as the example, let’s see how to create a game for Intel architecture.

      Download Cocos2D-x

      Download the latest stable version of Cocos2D-x from the web site: http://www.cocos2d-x.org/projects/cocos2d-x/wiki/Download

      Execute the batch

      Execute the batch file from Windows Explorer. When it asks you for the package name, use something like com.yourproject.something, and choose the project name and target ID. This will create a folder with the project name inside the cocos2d-x installation folder. You should see the script execute without any errors, something like this:

      Set Environment Variables of NDK_ROOT

      Add the following environment variable at the end of the home\<yourname>\.bash_profile file (in this case: c:\cygwin\home\user\.bash_profile):

      NDK_ROOT=/cygdrive/<yourname>/
      
      export NDK_ROOT
       

      Restart Cygwin, input cd $NDK_ROOT, and you should see this screen:

      Execute the build_native.sh file

      The default configuration is for ARM; we need to change it to compile for x86. Open the file \helloworld\proj.android\build_native.sh, find the ndk-build command, and add the APP_ABI=x86 parameter to the end of that command. Run it in Cygwin and you will see:
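      As a sketch, the modified line in build_native.sh might look like the excerpt below. The exact variable names depend on your cocos2d-x version, so treat this as an illustrative example rather than a verbatim patch:

      ```sh
      # before: builds for the default ABI (armeabi)
      # "$NDK_ROOT"/ndk-build -C "$APP_ANDROID_ROOT" $*

      # after: target x86 so the native library runs on IA devices
      "$NDK_ROOT"/ndk-build -C "$APP_ANDROID_ROOT" $* APP_ABI=x86
      ```

      You can also pass several ABIs at once (e.g., APP_ABI="armeabi x86") to produce libraries for both architectures in one build.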

      Import project to Eclipse

      Now go to Eclipse, create a new project -> Import from existing project.

      Build and Run

      At this step, Eclipse will report some problems:

      The import org.cocos2dx.lib cannot be resolved (HelloWorld.java, /HelloWorld/src/com/young40/test, line 26)

      Cocos2dxActivity cannot be resolved to a type (HelloWorld.java, line 30)

      Cocos2dxActivity cannot be resolved to a type (HelloWorld.java, line 33)

      You must import the following library into Eclipse as a project:

      cocos2d-2.1beta3-x-2.1.1/cocos2dx/platform/android/java

      Go to Project -> Build, and then Run As -> Android Application:

      Then a game framework based on the cocos2d-x game engine will be built. You can add game logic, audio, pictures, and other resources to this project to make a full game.

      Optimize Android NDK-based games on IA platforms

      Intel® System Studio is a suite of tools for profiling and optimizing applications on Android platforms. Of course, we can use it to optimize games. Intel System Studio includes:

      • Intel® C++ Compiler
      • Intel® Graphics Performance Analyzers
      • Intel® VTune Amplifier
      • (Intel® JTAG Debugger)

      Here we won’t explain the details about each tool. Instead, we will walk through an example that shows how Intel tools work.

      First, let’s take an application, called Bounding Ball, that we will run on an Intel® Atom™ Z2460 (code name Medfield) processor. The game has more than 800 balls that move at random speeds and collide with each other without any regularity. We can see that performance is poor by measuring the FPS: it is only 6 without any optimization.

      We can use Intel® Graphics Performance Analyzers (Intel® GPA) to locate which module is the bottleneck and find out if it is CPU bound or GPU bound.

      The Intel GPA screenshot below shows a chart that describes the details of this application via GPA on the Android platform. From it, you can see that the CPU consumed 52.5% of the resources, a rather high ratio for one application. Meanwhile, the ISP Load, TA Load, TSP Load, and USSE Total Load metrics inside the GPU are all less than 10%, which means that the GPU load is normal. Thus we can conclude that the bottleneck is in the CPU module. To further analyze the CPU bottleneck, we need to profile the code using the VTune™ analyzer.

      Here, we don’t describe how to use the VTune analyzer; we just explain the results we obtained when we ran it. The hotspots are the sin and cos functions inside libm.so. So the question is: why does the application spend so much time and so many CPU cycles running these two functions?

      By checking the application source code, we find that these two hotspot functions are called when every ball is rendered by OpenGL ES*. As the geometry of all the balls is the same and only the size differs, we can draw the balls from a single shared geometry using the OpenGL function glScale, so the calls to the hotspot functions are decreased greatly.
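      The idea can be sketched in plain C: compute the unit-circle geometry (the expensive sin/cos work) once, then derive each ball's vertices by a cheap per-ball scale, which is the job glScale performs on the GPU. This is an illustrative sketch, not the Bounding Ball source:

      ```c
      #include <assert.h>
      #include <math.h>

      #define SEGMENTS 32

      static const float PI = 3.14159265358979f;
      static float unit_x[SEGMENTS], unit_y[SEGMENTS];

      /* Expensive part: done once, not once per ball per frame. */
      void build_unit_circle(void) {
          for (int i = 0; i < SEGMENTS; i++) {
              float a = 2.0f * PI * i / SEGMENTS;
              unit_x[i] = cosf(a);
              unit_y[i] = sinf(a);
          }
      }

      /* Cheap part: per ball, just a multiply (glScale does the same on the GPU). */
      void ball_vertex(float radius, int i, float *x, float *y) {
          *x = radius * unit_x[i];
          *y = radius * unit_y[i];
      }

      int main(void) {
          build_unit_circle();
          float x, y;
          ball_vertex(2.0f, 0, &x, &y);            /* angle 0: (r, 0) */
          assert(fabsf(x - 2.0f) < 1e-4f && fabsf(y) < 1e-4f);
          ball_vertex(3.0f, SEGMENTS / 4, &x, &y); /* angle pi/2: (0, r) */
          assert(fabsf(x) < 1e-4f && fabsf(y - 3.0f) < 1e-4f);
          return 0;
      }
      ```

      With this layout, sinf/cosf run SEGMENTS times total instead of SEGMENTS times per ball per frame, which is exactly the reduction the profile called for.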

      After this code optimization, performance improved by 80%; the FPS rose to 14. Further, we can compile the application with the Intel C/C++ Compiler to get better performance on Intel architecture platforms. The Intel C/C++ Compiler has many flags for performance optimization on IA platforms. Here we introduce just some of them.

      • SSSE3_ATOM
        Supplemental Streaming SIMD Extensions 3 (SSSE3 or SSE3S) is a SIMD instruction set created by Intel; it is the fourth iteration of the SSE technology.
      • IPO
        The Interprocedural Optimization flag reduces function call overhead, eliminates dead code, and performs constant propagation and procedure reordering.
      • PGO
        The Profile-Guided Optimization flag answers questions that the static optimizer would otherwise have to leave open, such as:
        • How often is x > y?
        • What is the size of count?
        • Which code is touched, and how often?
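      Assuming the Linux command-line form of these flags, a build using them might be sketched as follows (the file names are illustrative):

      ```sh
      # Tune for Atom and enable interprocedural optimization
      icc -xSSSE3_ATOM -ipo -O2 -c game.c

      # Profile-guided optimization is a two-pass build:
      icc -prof-gen -O2 game.c -o game   # pass 1: build an instrumented binary
      ./game                             # run a representative workload to collect a profile
      icc -prof-use -O2 game.c -o game   # pass 2: rebuild using the collected profile
      ```

      The PGO workload should resemble real gameplay; a profile collected on an unrepresentative run can steer the optimizer in the wrong direction.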

      In addition, the Intel C/C++ Compiler can also enhance applications as follows:

      • More accurate branch prediction
      • Basic block movement to improve instruction cache behavior
      • Better decision of functions to inline (help IPO)
      • Better optimization of function ordering
      • Optimization of switch-statements
      • Better vectorization decisions

      Different compilers and different compiler parameters can yield different performance from the same app. Here is a performance comparison of two compilers, GCC and ICC, running the same Bounding Ball application on an Android phone based on Intel Medfield. The blue bars show GCC performance and the red bars show ICC. The baseline is a compile without any extra parameters; the second part of the chart is compiled with arch=atom, and the third part is recompiled with all the parameters mentioned above. In the end, the app compiled by ICC performs 60% better than the one compiled by GCC.

      Summary

      We’ve given you a quick introduction to Android game development and optimization on IA platforms. Game engines are the core of game development: if an engine runs well on IA platforms, the games built on it will run well too. We took the popular game engine cocos2d-x as an example to demonstrate how to develop on the IA platform. Intel also offers many tools for developers to optimize their game applications on Android platforms. Using Intel System Studio, we showed the steps for optimizing a demo application.

      About the Author

      Tao Peng is an application engineer in the Intel Software and Service Group, focusing on mobile application enabling, including Android application development and optimization for x86 devices and HTML5 web application development.

    • What's New? Intel® Threading Building Blocks 4.2


      One of the best known C++ threading libraries, Intel® Threading Building Blocks (Intel® TBB), was recently updated to release 4.2. The updated version contains several key new features compared to the previous release, 4.1. Some of them were already released in TBB 4.1 updates.

      The new synchronization primitive speculative_spin_mutex introduces support for speculative locking. This has become possible using the Intel® Transactional Synchronization Extensions (Intel® TSX) hardware feature available in 4th generation Intel® Core™ processors. On processors that support hardware transactional memory (like Intel® TSX), speculative mutexes work by letting multiple threads acquire the same lock, as long as there are no "conflicts" that could generate results different from non-speculative locking. So no serialization happens in non-contended cases. This may significantly improve performance and scalability for "short" critical sections. If there is no hardware support for transactional synchronization, speculative mutexes behave like their non-speculating counterparts, but possibly with worse performance.

      Intel TBB now supports the exact exception propagation feature (based on C++11 exception_ptr). With exception_ptr, exception objects can be safely copied between threads. This brings flexibility to exception handling in a multithreaded environment. Exact exception propagation is now available in prebuilt binaries for all platforms: OS X*, Windows*, and Linux*. On OS X* there are two sets of binaries: the first is linked with the gcc standard library; it is used by default and doesn’t support exact exception propagation. To use the feature, take the second set of binaries, linked with libc++, the C++ standard library in Clang. To use these, set up the Intel TBB environment and build your application in the following way:

      # tbbvars.sh libc++
      # clang++ -stdlib=libc++ -std=c++11 concurrent_code.cpp -ltbb
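      Exact exception propagation rests on the C++11 std::exception_ptr machinery. The standard-library sketch below illustrates the idea TBB builds on: an exception thrown on a worker thread is captured, handed to another thread, and rethrown there intact:

      ```cpp
      #include <cassert>
      #include <exception>
      #include <stdexcept>
      #include <string>
      #include <thread>

      // Throw on a worker thread, capture the exception, rethrow on the caller's thread.
      std::string run_demo() {
          std::exception_ptr eptr;
          std::thread worker([&eptr] {
              try {
                  throw std::runtime_error("failure on worker thread");
              } catch (...) {
                  eptr = std::current_exception(); // the exception object can be copied between threads
              }
          });
          worker.join();
          try {
              if (eptr) std::rethrow_exception(eptr);
          } catch (const std::runtime_error& e) {
              return e.what(); // the original message survives the thread boundary
          }
          return "";
      }

      int main() {
          assert(run_demo() == "failure on worker thread");
          return 0;
      }
      ```

      In TBB, the same mechanism lets an exception thrown inside a parallel algorithm reach the thread that invoked it, rather than being reduced to a generic captured-exception type.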

      In addition to the concurrent_unordered_set and concurrent_unordered_map containers, we now provide concurrent_unordered_multiset and concurrent_unordered_multimap, based on the Microsoft* PPL prototype. concurrent_unordered_multiset provides the ability to insert an item more than once, which is not possible in concurrent_unordered_set. Similarly, concurrent_unordered_multimap allows inserting more than one <key,value> pair with the same key. For both "multi" containers, find will return the first item (or <key,value> pair) in the table with a matching search key.
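      These semantics mirror the standard library's unordered multi-containers; the std:: analogue below illustrates the duplicate-key behavior described above (TBB's containers add thread-safe concurrent insertion on top of it):

      ```cpp
      #include <cassert>
      #include <string>
      #include <unordered_map>

      int main() {
          std::unordered_multimap<std::string, int> scores;

          // Unlike a plain map, the same key may be inserted more than once.
          scores.insert({"alice", 10});
          scores.insert({"alice", 20});
          scores.insert({"bob", 5});
          assert(scores.count("alice") == 2);

          // find() returns just one of the matching pairs...
          auto it = scores.find("alice");
          assert(it != scores.end() && it->first == "alice");

          // ...while equal_range() walks all pairs sharing the key.
          int total = 0;
          for (auto range = scores.equal_range("alice"); range.first != range.second; ++range.first)
              total += range.first->second;
          assert(total == 30);
          return 0;
      }
      ```

      As with the TBB containers, which of the duplicate entries find returns first is an implementation detail; use equal_range when you need all of them.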

      Intel TBB containers can now be conveniently initialized with value lists as specified by C++ 11 (initializer lists):

      tbb::concurrent_vector<int> v ({1,2,3,4,5} );

      Currently, initializer lists are supported by the following containers:

      concurrent_vector
      concurrent_hash_map
      concurrent_unordered_set
      concurrent_unordered_multiset
      concurrent_unordered_map
      concurrent_unordered_multimap
      concurrent_priority_queue

      The scalable memory allocator maintains per-thread caches of allocated memory. This is done for the sake of performance, but often at the cost of increased memory usage. Although the memory allocator tries hard to avoid excessive memory usage, for complex cases Intel TBB 4.2 gives more control to the programmer: it is now possible to reduce memory consumption by cleaning the thread caches with the scalable_allocation_command() function. Several improvements were also made to overall allocator performance.

      The Intel TBB library is widely used on different platforms. Mobile developers can now find prebuilt binary files for Android in the Linux* OS package. Binary files for Windows Store* applications were added to the Windows* OS package.

      Atomic variables tbb::atomic<T> now have constructors when used with C++11. This allows programmers to value-initialize them on declaration, with const expressions properly supported. Currently this works for the gcc and Clang compilers:

      tbb::atomic<int> v=5;
      

      The new community preview feature allows waiting until all worker threads terminate. This may be needed if an application forks processes, or if the TBB dynamic library can be unloaded at runtime (e.g., if TBB is part of a plugin). To enable waiting for workers, initialize the task_scheduler_init object this way:

      #define TBB_PREVIEW_WAITING_FOR_WORKERS 1
      tbb::task_scheduler_init scheduler_obj (threads, 0, /*wait_workers=*/true);

      Find the new TBB 4.2 on commercial and open source sites. Download and enjoy the new functionality!

       

    • Building Android* NDK applications with Intel® IPP



      Intel IPP provides highly optimized building-block functions for image processing, signal processing, vector math, and small matrix computation. Several IPP domains contain functions hand-tuned for the Intel® Atom™ processor that take advantage of Intel® Streaming SIMD Extensions (Intel® SSE) instructions. The IPP static non-threaded Linux* libraries now support the Android* OS and can be used with Android applications.

      This article gives an introduction on how to add Intel IPP functions to Android NDK applications. Intel IPP provides processor-specific optimization and can only be linked with native Android C/C++ code. To use Intel IPP with your application, you need to call Intel IPP functions in your source code, and you also need to add the IPP libraries to the build command line.

      Using Intel IPP

      1. Adding Intel IPP functions in source

      • In source files, include the Intel IPP header files (ipp.h)
      • Call ippInit() before using any other IPP function. Intel IPP detects the processor features and selects the optimized code path for the target processor, so call ippInit() first to initialize this CPU dispatching.
      • Call Intel IPP functions in your C/C++ source.

      2. Including Intel IPP libraries into the Android NDK building files

      • Copy Intel IPP libraries and headers to your project folder.
      • Find the Intel IPP libraries required for the application: Intel IPP libraries are categorized into different domains. Each domain has its own library, and some domain libraries depend on others. You need to include all the domain libraries and their dependencies in the link line. Check the “Intel IPP Library Dependencies” article to learn which Intel IPP libraries are required.
      • Add the IPP libraries to android building script file “jni/Android.mk”:
        Declare each IPP library as the prebuilt library module. For example, if the application uses two Intel IPP libraries "libipps.a" and "libippcore.a", add the following into the file:

      include $(CLEAR_VARS)
      LOCAL_MODULE := ipps
      LOCAL_SRC_FILES := ../ipp/lib/ia32/libipps.a
      include $(PREBUILT_STATIC_LIBRARY)

       

      include $(CLEAR_VARS)
      LOCAL_MODULE := ippcore
      LOCAL_SRC_FILES := ../ipp/lib/ia32/libippcore.a
      include $(PREBUILT_STATIC_LIBRARY) 

       

      Add the header path and IPP libraries into the modules calling IPP functions:

        
      include $(CLEAR_VARS)
      LOCAL_MODULE     := IppAdd
      LOCAL_SRC_FILES  := IppAdd.c
      LOCAL_STATIC_LIBRARIES := ipps ippcore
      LOCAL_C_INCLUDES := ./ipp/include
      include $(BUILD_SHARED_LIBRARY)

      Building one sample code

      A simple example is included below that shows Intel IPP usage in native Android code. The code uses the Intel IPP ippsAdd_32f() function to add the data of two arrays.
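      For reference, ippsAdd_32f(pSrc1, pSrc2, pDst, len) performs an element-wise addition of two float arrays. A plain-C model of the operation (without IPP's SIMD dispatching) looks like this; the function name here is our own, not an IPP symbol:

      ```c
      #include <assert.h>

      /* Plain-C model of what ippsAdd_32f computes: pDst[i] = pSrc1[i] + pSrc2[i] */
      void adds_32f_ref(const float *pSrc1, const float *pSrc2, float *pDst, int len) {
          for (int i = 0; i < len; i++)
              pDst[i] = pSrc1[i] + pSrc2[i];
      }

      int main(void) {
          const float a[] = {1.0f, 2.0f, 3.0f};
          const float b[] = {10.0f, 20.0f, 30.0f};
          float c[3];
          adds_32f_ref(a, b, c, 3);
          assert(c[0] == 11.0f && c[1] == 22.0f && c[2] == 33.0f);
          return 0;
      }
      ```

      The IPP version does the same arithmetic but dispatches to SSE-optimized code selected by ippInit() at runtime.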

      To review Intel IPP usage in the code:

      1. Download the sample code and unpack it to your project folder(<projectdir>).
      2. Review IPP usage in the source files: the "jni/IppAdd.c" file provides the implementation of one native function, NativeIppAdd(), which calls the Intel IPP ippsAdd_32f() function. The "src/com/example/testippadd/ArrayAddActivity.java" file calls the native NativeIppAdd() function through JNI.
      3. Check the "jni/Android.mk" file. This file adds the required IPP libraries to the build script. The sample uses the ippsAdd_32f() function, which belongs to the Intel IPP signal processing domain. The function depends on the "libipps.a" and "libippcore.a" libraries. The "Android.mk" file creates two prebuilt library modules for them.

      You can build the sample code either with the SDK and NDK command-line tools or with the Eclipse* IDE.

          Build the sample from a command line

      1. Copy the Intel IPP headers and libraries into your project folder (e.g. <projectdir>/ipp). 
      2. Run the "ndk-build" script from your project's directory to build the native code
         >cd  <projectdir> 
         ><ndkdir>/ndk-build
      3. Build android package and install the application
        >cd <projectdir>
        >android update project -p  . -s
        >ant debug
        >adb install bin/ArrayAddActivity-debug.apk

           Build the sample by Eclipse* IDE

      1. Copy the Intel IPP headers and libraries into your project folder (e.g. <projectdir>/ipp).
      2. In Eclipse, click File >> New >> Project... >> Android >> Android Project from Existing Code. In "Root Directory", select the sample code folder, then click Finish.
      3. Run the 'ndk-build' script from your project's directory to build the native code: 
        >cd <projectdir> 
        ><ndkdir>/ndk-build
      4. Build the application in the Eclipse IDE and deploy the .apk file.

      Summary
      This article provided an introduction to IPP usage with native Android* applications. You can find more information on Intel IPP functions in the IPP manual.


       


    • Intel® HTML5 Tools for developing mobile applications


      by Egor Churaev

      Downloads


      Intel® HTML5 Tools for developing mobile applications [PDF 821.98KB]
      iOS Source Code[ZIP file 168 KB]
      HTML5 Tools Result Source Code[ZIP file 86KB]

      HTML5 is the new HTML standard. Recently, Intel Corporation announced a set of HTML5 Tools for developing mobile applications. This paper shows you how to port an Apple iOS* accelerometer app to HTML5 using these tools. Please note: Auto-generated code created by the XDK may contain code licensed under one or more of the licenses detailed in Appendix A of this document.  Please refer to the XDK output for details on which libraries are used to enable your application.

      Intel® HTML5 App Porter Tool


      The first thing we’ll do is take an iOS accelerometer app and convert the Objective-C* source code to HTML5. We’ll do this using the Intel® HTML5 App Porter Tool and the source code found here: [iOS_source.zip] (Note: iOS_source sample code is provided under the Intel Sample Software License detailed in Appendix B). You can download the Intel HTML5 App Porter Tool from the Tools tab here: http://software.intel.com/en-us/html5. After filling in and submitting the form with your e-mail address, you will get links for downloading this tool. Instructions for using the tool can be found on this site: http://software.intel.com/en-us/articles/tutorial-creating-an-html5-app-from-a-native-ios-project-with-intel-html5-app-porter-tool.

      When you are finished performing all the steps, you will get HTML5 source code.

      Intel® XDK


      You can open the HTML5 code in any IDE. Intel offers a convenient tool for developing HTML5 applications: the Intel® XDK, a cross-platform development kit (http://html5dev-software.intel.com/). With the Intel XDK, developers can write a single source code base for deployment on many devices. What is particularly good is that it is not necessary to install it on your computer: you can install it as an extension for Google Chrome*. If you use another browser, you have to download a JavaScript* file and run it. Sometimes it is also necessary to update Java*.

      After installing Intel XDK, you will see the main window:

      If you want to port existing code, press the big “Start new” button.

      If you’re creating a new project, enter the Project Name and check “Create your own from scratch,” as shown in the screen shot below.

      Check “Use a blank project.” Wait a bit, and you will see the message “Application Created Successfully!”

      Click “Open project folder.”

      Remove all files from this folder and copy in the ported files. We haven’t quite ported the accelerometer app yet; we still have to write an interface for it. We can remove the hooks created by the Intel HTML5 App Porter tool. Remove these files:

      • todo_api_application__uiaccelerometerdelegate.js
      • todo_api_application_uiacceleration.js
      • todo_api_application_uiaccelerometer.js
      • todo_api_js_c_global.js

      To update the project in Intel XDK, go to the editor window in the Windows emulator.

      Open the index.html file and remove the lines left from the included files.

      Open the todo_api_application_appdelegate.js file and implement the unmapped “window” property of the delegate.

      application.AppDelegate.prototype.setWindow = function(arg1) {
          // ================================================================
          // REFERENCES TO THIS FUNCTION:
          // line(17): C:WorkBloggingechuraevAccelerometerAccelerometerAppDelegate.m
          //    In scope: AppDelegate.application_didFinishLaunchingWithOptions
          //    Actual arguments types: [*js.APT.View]
          //    Expected return type: [unknown type]
          //
          //if (APT.Global.THROW_IF_NOT_IMPLEMENTED)
          //{
              // TODO remove exception handling when implementing this method
             // throw "Not implemented function: application.AppDelegate.setWindow";
          //}
      this._window = arg1;
      };
      
      application.AppDelegate.prototype.window = function() {
          // ================================================================
          // REFERENCES TO THIS FUNCTION:
          // line(20): C:WorkBloggingechuraevAccelerometerAccelerometerAppDelegate.m
          //    In scope: AppDelegate.application_didFinishLaunchingWithOptions
          //    Actual arguments types: none
          //    Expected return type: [unknown type]
          //
          // line(21): C:WorkBloggingechuraevAccelerometerAccelerometerAppDelegate.m
          //    In scope: AppDelegate.application_didFinishLaunchingWithOptions
          //    Actual arguments types: none
          //    Expected return type: [unknown type]
          //
          //if (APT.Global.THROW_IF_NOT_IMPLEMENTED)
          //{
              // TODO remove exception handling when implementing this method
             // throw "Not implemented function: application.AppDelegate.window";
          //}
      return this._window;
      };
      

      Open the viewcontroller.js file. Remove all the functions used for working with the accelerometer in the old iOS app. In the end we get this file:

      APT.createNamespace("application");
      
      document.addEventListener("appMobi.device.ready",onDeviceReady,false);
      
      APT.ViewController = Class.$define("APT.ViewController");
      
      application.ViewController = Class.$define("application.ViewController", APT.ViewController, {
          __init__: function() {
              this.$super();
          }
      });
      In the ViewController_View_774585933.css file, we have to change the color styles of the elements from black to white so they are readable on the black background: color: rgba(0,0,0,1); → color: rgba(256,256,256,1);. As a result we get:
      div#Label_590244915
      {
      	left: 20px;
      	color: rgba(256,256,256,1);
      	height: 21px;
      	position: absolute;
      	text-align: left;
      	width: 320px;
      	top: 0px;
      	opacity: 1;
      }
      div#Label_781338720
      {
      	left: 20px;
      	color: rgba(256,256,256,1);
      	height: 21px;
      	position: absolute;
      	text-align: left;
      	width: 42px;
      	top: 29px;
      	opacity: 1;
      }
      div#Label_463949782
      {
      	left: 20px;
      	color: rgba(256,256,256,1);
      	height: 21px;
      	position: absolute;
      	text-align: left;
      	width: 42px;
      	top: 51px;
      	opacity: 1;
      }
      div#Label_817497855
      {
      	left: 20px;
      	color: rgba(256,256,256,1);
      	height: 21px;
      	position: absolute;
      	text-align: left;
      	width: 42px;
      	top: 74px;
      	opacity: 1;
      }
      div#Label_705687206
      {
      	left: 70px;
      	color: rgba(256,256,256,1);
      	height: 21px;
      	position: absolute;
      	text-align: left;
      	width: 42px;
      	top: 29px;
      	opacity: 1;
      }
      div#Label_782673145
      {
      	left: 70px;
      	color: rgba(256,256,256,1);
      	height: 21px;
      	position: absolute;
      	text-align: left;
      	width: 42px;
      	top: 51px;
      	opacity: 1;
      }
      div#Label_1067317462
      {
      	left: 70px;
      	color: rgba(256,256,256,1);
      	height: 21px;
      	position: absolute;
      	text-align: left;
      	width: 42px;
      	top: 74px;
      	opacity: 1;
      }
      div#View_774585933
      {
      	left: 0px;
      	height: 548px;
      	position: absolute;
      	width: 320px;
      	top: 20px;
      	opacity: 1;
      }
      

      After updating the emulator window, you see:

      To code the accelerometer functions, we need to use the appMobi JavaScript Library. Documentation for this library can be found here. It’s installed when you download Intel XDK.

      Open the index.html file and add this line into the list of scripts:

      <script type="text/javascript" charset="utf-8" src="http://localhost:58888/_appMobi/appmobi.js"></script>

      Open the ViewController_View_774585933.html file. We have to rename the fields to more logical names, from:

      <div data-apt-class="Label" id="Label_705687206">0</div>
      <div data-apt-class="Label" id="Label_782673145">0</div>
      <div data-apt-class="Label" id="Label_1067317462">0</div>
      

      to:

      <div data-apt-class="Label" id="accel_x">0</div>
      <div data-apt-class="Label" id="accel_y">0</div>
      <div data-apt-class="Label" id="accel_z">0</div>
      

      The same should be done in the ViewController_View_774585933.css file, where we have to rename the style names.

      Open the viewcontroller.js file and write some functions for using the accelerometer.

// Success callback: write the latest accelerometer reading into the labels.
function suc(a) {
    document.getElementById('accel_x').innerHTML = a.x;
    document.getElementById('accel_y').innerHTML = a.y;
    document.getElementById('accel_z').innerHTML = a.z;
}

var timer; // watch handle returned by watchAcceleration

var watchAccel = function () {
    var opt = {};
    opt.frequency = 5; // sampling option passed to the accelerometer watch
    timer = AppMobi.accelerometer.watchAcceleration(suc, opt);
};

function onDeviceReady() {
    watchAccel();
}
document.addEventListener("appMobi.device.ready", onDeviceReady, false);
      

      Update the project, and you can see it on the emulator window:

      You can see how the accelerometer works on Intel XDK using the “ACCELEROMETER” panel:

      The application will look like this:

      The complete application source code can be found here.
Appendix A: Intel® XDK Code License Agreements

      Appendix B: Intel Sample Source Code License Agreement

      Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.
      Copyright © 2013 Intel Corporation. All rights reserved.
      *Other names and brands may be claimed as the property of others.

    • accelerometer
    • Intel® XDK
    • Developers
    • Android*
    • Apple iOS*
    • Android*
    • HTML5
    • Windows*
    • Beginner
    • Mobility
    • Laptop
    • Phone
    • URL
    • GA Tech 2013 Code for Good Student Hackathon


For the past 24 hours, we have been holding the 2nd GA Tech Code for Good Student Hackathon.  In continuation of the previous event, we retained the theme of teaching healthy lifestyle choices to combat childhood obesity.  From edutainment to exercise games, we seek to create worthwhile projects that can help an at-risk demographic: our future.

      With Intel providing the food and Android tablets, the students have been working non-stop on these beneficial games.  Our host at Georgia Tech is Professor Matthew Wolf.  Special guest Cornelia Davis from Pivotal labs joined us to share her expertise on Cloud Foundry, with which the students have hosted and distributed their software. 

      Variations on the Theme

From the previous hackathon on this subject, domain experts shared these insights:

      From Healthier Generations -

       Are there technologies that solve similar problems?

      Perhaps you’re inspired by a feature of another piece of technology such as an app on your phone, or an online service. Do you know of other technologies that solve similar problems, or solve a problem in a similar way to what you imagine?

      Two apps that do some of the things that we think are important are Instagram* and WebMD*. 

      Instagram - people can take photos, put them on a map and connect with others through images. In case of childhood obesity, they could take photos and/or map comments about their environment as it relates to access to healthy food and safe places for physical activity. 

      WebMD* - similar to how WebMD identifies symptoms and treatments, we would like to offer questions about a person’s environment and help them identify solutions in their environment.

      From Dr. Marks

1) The most important thing is to get people moving.  Hopefully walking, but at least moving.  Games that require kids to actually walk to move the character through the game, and reward them for it, would be great.

2) Nutrition that not only rates meals, but also presents nutrition information in an understandable format, relevant to school lunches, would also be good.  The overwhelming majority of school foods in this country are provided by a single company, so this is doable.  It does need to be fun, or kids won't do it.  You can also take advantage of the cameras that most cell phones have these days.  Is there any way to photograph a school lunch and cross-reference it with the known inventory of the company supplying the food?  Could you have some kind of reference item of known shape and size that gets photographed with the food so that portion sizes can be estimated?

3) Knowledge is power.  Kids who know where their food came from make better choices.  How many kids know, for example, that ketchup is mostly high-fructose corn syrup?  Do they even know what a tomato is?

4) Kids do in fact educate and pressure their parents in very meaningful ways. The question is how to build in motivation and reward on both sides.

5) Is there any way to turn a standard phone into a pedometer?  Can you track how much a child moved so that appropriate rewards can be offered?

6) Improving our ability to move through the built environment is key.

      There are many map programs that calculate driving routes. Is it possible to calculate the best/safest walking or biking route?

      Close to the End

      As we enter the home stretch, the teams rush to finish their demos.  Pictures and project source will follow soon.  

    • Code for Good
    • hackathon
    • GA Tech
    • healthy living
    • html5
    • javascript
    • cloud foundry
    • pivotal labs

    • Event
    • Cloud Computing
    • Game Development
    • HTML5
    • JavaScript*
    • Android*
    • Cloud Services
    • Code for Good
    • HTML5
    • Tablet
    • Developers
    • Professors
    • Students
    • Android*
• Take part in the second Intel Android* AppLab - DevFest São Paulo


    • Developers
    • Android*
    • Android*
    • URL
• Sign up for the Second Intel Android* AppLab - DevFest São Paulo


Intel Android AppLab

Intel is inviting the best developers and companies to take part in the second Intel Android AppLab, which will be held on Saturday, November 23, during DevFest São Paulo at Rua Martins Fontes, 330.

Come discover how to get the most out of your apps using Intel technologies!


What is an AppLab?


An AppLab is a technical training session, both theoretical and hands-on, led by Intel's Android Community Manager. Besides covering the main topics of development, debugging, and performance, the AppLab explores the tools and techniques Android developers need to get maximum performance out of their apps while staying compatible with as many devices on the market as possible.

Prerequisites


Before signing up, check the prerequisites for getting the most out of the AppLab:

• Preference is given to those who have already published an app on Google Play or are about to;
• Experience developing Android applications;
• Knowledge of the Android NDK is desirable;

Why participate?


Intel's Android AppLabs are intended to support Brazilian developers both technically and through marketing actions that help companies promote themselves and raise their visibility in the mobile app market. In addition, Intel will cover AppLab participants' admission to DevFest at no cost.

After the AppLab, Intel will select the best NDK apps with native x86 support to receive a package of marketing services valued at US$ 5,000.00, plus a press release on Intel's worldwide PRNewswire.

How to participate


Click the link below to register for the Intel Android* AppLab:
http://software.intel.com/pt-br/articles/participe-do-segundo-applab-android-intel-devfest-s-o-paulo

This AppLab is limited to approximately 30 participants. We suggest that registered participants bring a computer with an Android development environment already set up.

Agenda


14:00: Introduction to Intel Tools for Android
14:30: Developing NDK Apps for Android
15:00: Fat binaries, Multiple APKs & Google Play
15:30: Porting Lab: Bring your code and let's add x86 support together

*The agenda is subject to change.

Check out the links below to start preparing for the AppLab:

• android; developer event; developers

    • C/C++
    • Java*
    • Android*
    • Phone
    • Tablet
    • Developers
    • Partners
    • Android*
    • SES 2013 Hackathon


      SES 2013 Hackathon Signage

      Offering the flexibility of JavaScript (through the Intel XDK) or the power of native Java code, developers of all skill levels converged at SES 2013 to learn more about developing for Android platforms.  

The turnout (around 10 people) was less than expected; this allowed us to work with the participants on a more personal level, tailoring our dynamic presentation on the fly and solving individual issues. 

The main consideration we overlooked in designing the content was our demographic.  Since my background is in organizing and running student hackathons, the participants had a drastically different mindset than I was used to seeing: they cared less about the final product and more about the journey.  While this can sometimes be true of students, it is almost universal for engineers exploring new tools.  The long duration (8 hours spread over 3 time slots) was not conducive to high attendance, especially when each section of the event ran concurrently with other great sessions (I was sad to miss some of them myself due to the scheduling).  

      Next year we plan to refocus the event, clarifying the target and shaping the scope to better suit the participants' needs.  

    • hackathon
    • SES
    • xdk
    • html5
    • javascript
    • java
    • haxm
    • android
    • ndk

    • Event
    • Development Tools
    • Intel Hardware Accelerated Execution Manager (HAXM)
    • Intel® XDK
    • HTML5
    • Java*
    • JavaScript*
    • Android*
    • HTML5
    • Phone
    • Developers
    • Android*
• Android Development: Multithreading and the Handler in Detail


Why multithread in Android development?

The Services, Activities, and Broadcast receivers we create all run on a single main thread, which we can think of as the UI thread. Time-consuming operations such as large file I/O, database access, and network downloads can take a long time; to avoid blocking the user interface and triggering the ANR (Application Not Responding) dialog, we can move that work onto a Thread.

What problems does using Thread raise on Android?

For programmers who have done J2ME development, Thread is simple: create an anonymous subclass, override run(), and call start(); or implement the Runnable interface. On Android, however, the UI controls are not designed to be thread-safe, so a synchronization mechanism is needed to refresh them. Here Google's design for Android borrows from the Win32 message-handling model.

The postInvalidate() method

To refresh a View-based UI from a worker thread, you can call postInvalidate(). Overloads let you refresh just a rectangular region, such as postInvalidate(int left, int top, int right, int bottom), and delayed variants are provided, postInvalidateDelayed(long delayMilliseconds) and postInvalidateDelayed(long delayMilliseconds, int left, int top, int right, int bottom), where the first parameter is in milliseconds:

void postInvalidate()
void postInvalidate(int left, int top, int right, int bottom)
void postInvalidateDelayed(long delayMilliseconds)
void postInvalidateDelayed(long delayMilliseconds, int left, int top, int right, int bottom)

Handler

The recommended approach is to use a Handler: in a thread's run() method, call the handler object's post or sendMessage methods. Android maintains an internal message queue and polls it to process these messages. Win32 programmers will find this message handling familiar, although Android does not provide hooks such as PreTranslateMessage for intercepting the internal dispatch.

The handler is the message processor: it wraps the information to be passed into a Message, typically obtained by calling the handler object's obtainMessage(). The message is handed to the Looper by calling the handler object's sendMessage(), and the Looper places the Message into the MessageQueue. When the Looper sees that the MessageQueue contains a Message, it dispatches it, and the handler that owns the message processes it in its handleMessage() method.

A Handler mainly receives data sent from child threads and uses that data to update the UI on the main thread.

When an application starts, Android first starts a main thread (the UI thread). The main thread manages the UI controls and dispatches events; for example, when you tap a Button, Android dispatches the event to that Button so it can respond to your action. If you then need a time-consuming operation, such as fetching data over the network or reading a large local file, you cannot put it on the main thread: the interface would freeze, and if the operation has not finished after about five seconds, the Android system shows a "force close" error. Such operations belong in a child thread. But since Android's UI toolkit is not thread-safe, the UI may only be updated from the main thread; touching it from a child thread is dangerous. This is where Handler comes in. Because the Handler runs on the main (UI) thread, it can exchange data with child threads through Message objects: the Handler receives the Message that a child thread sends via sendMessage() (carrying the data), puts it into the main thread's queue, and works with the main thread to update the UI.


Handler characteristics: a Handler can dispatch Message objects and Runnable objects to the main thread. Each Handler instance is bound to the thread that created it (usually the main thread).

It serves two purposes:
(1) to schedule a message or Runnable to run at some point on the main thread;
(2) to schedule an action to run on a different thread.

Methods for dispatching messages on a Handler:
post(Runnable)
postAtTime(Runnable, long)
postDelayed(Runnable, long)
sendEmptyMessage(int)
sendMessage(Message)
sendMessageAtTime(Message, long)
sendMessageDelayed(Message, long)

The post... methods enqueue a Runnable object on the main thread's queue; the sendMessage... methods enqueue a Message object carrying data, to be processed when its turn comes.

Handler example
// The subclass must extend Handler and override handleMessage(Message msg) to receive data from the thread.
// The following example changes the text of a Button in the UI from a thread.

import android.app.Activity;
import android.os.Bundle;
import android.os.Handler;
import android.os.Looper;
import android.os.Message;
import android.util.Log;
import android.widget.Button;

public class MyHandlerActivity extends Activity {

    Button button;
    MyHandler myHandler;

    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.handlertest);
        button = (Button) findViewById(R.id.button);
        myHandler = new MyHandler();
        // When a new Handler instance is created, it binds to the current
        // thread and its message queue and starts dispatching data.
        // A Handler has two uses: (1) schedule Messages and Runnables to
        // run at some point; (2) run an action on a different thread.
        // It schedules messages with:
        // post(Runnable)
        // postAtTime(Runnable, long)
        // postDelayed(Runnable, long)
        // sendEmptyMessage(int)
        // sendMessage(Message)
        // sendMessageAtTime(Message, long)
        // sendMessageDelayed(Message, long)
        // The post... methods handle Runnable objects; the sendMessage...
        // methods handle Message objects (which can carry data).
        MyThread m = new MyThread();
        new Thread(m).start();
    }

    /**
     * Receives and processes messages; the Handler runs with the main thread.
     */
    class MyHandler extends Handler {

        public MyHandler() {
        }

        public MyHandler(Looper l) {
            super(l);
        }

        // Subclasses must override this method to receive data.
        @Override
        public void handleMessage(Message msg) {
            Log.d("MyHandler", "handleMessage......");
            super.handleMessage(msg);
            // The UI may be updated here.
            Bundle b = msg.getData();
            String color = b.getString("color");
            MyHandlerActivity.this.button.append(color);
        }
    }

    class MyThread implements Runnable {

        public void run() {
            try {
                Thread.sleep(10000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            Log.d("thread.......", "mThread........");
            Message msg = new Message();
            Bundle b = new Bundle(); // holds the data
            b.putString("color", "my");
            msg.setData(b);
            // Send the message through the Handler to update the UI.
            MyHandlerActivity.this.myHandler.sendMessage(msg);
        }
    }
}

Looper

In fact, every Thread in Android can have an associated Looper; the Looper helps the Thread maintain a message queue. (Yesterday's post on the "Can't create handler inside thread" error touched on this concept.) A Looper has no hard dependency on Handler. From the open source code we can see that Android also provides a Thread subclass, HandlerThread, that handles this for us: calling getLooper() on a HandlerThread object returns a handle to its Looper object, which we can bind to a Handler to obtain a thread-synchronized mechanism. Running a Looper requires initialization via Looper.prepare(), which is exactly the issue we saw yesterday, and resources must be released on exit via Looper.quit().

The Looper is the manager of the MessageQueue. No MessageQueue exists apart from a Looper; a Looper object is created via the prepare() function, and each Looper object is associated with one thread. Calling Looper.myLooper() returns the current thread's Looper object.
When a Looper object is created, a MessageQueue object is created along with it. Apart from the main thread, which has a default Looper, threads have no MessageQueue object by default and therefore cannot receive Messages. To receive them, give the thread its own Looper object (via the prepare() function); the thread then has its own Looper object and MessageQueue data structure.
The Looper takes a Message out of the MessageQueue and hands it to the Handler's handleMessage for processing. Once processing is finished, Message.recycle() is called to put it back into the Message Pool.
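The Looper/MessageQueue/Handler relationship described above can be simulated in a few lines of plain Java. This is only a conceptual sketch (the class name MiniLooper is invented for illustration; the real android.os classes are far more involved): one thread owns the queue and loops over it, while any other thread may post work into it.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Conceptual sketch of the Looper/MessageQueue/Handler pattern in plain
// Java (NOT the Android classes): one thread owns a queue and loops over
// it; other threads enqueue messages for it to process in order.
public class MiniLooper {
    private final BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();
    private volatile boolean running = true;

    // Analogous to Looper.loop(): drain the queue on the owning thread.
    public void loop() throws InterruptedException {
        while (running) {
            Runnable msg = queue.take();
            msg.run(); // "handleMessage"
        }
    }

    // Analogous to Handler.post(): any thread may enqueue work.
    public void post(Runnable msg) {
        queue.add(msg);
    }

    // Analogous to Looper.quit(): stop after the pending work is done.
    public void quit() {
        post(() -> running = false);
    }

    public static void main(String[] args) throws Exception {
        MiniLooper looper = new MiniLooper();
        StringBuilder out = new StringBuilder();
        Thread owner = new Thread(() -> {
            try { looper.loop(); } catch (InterruptedException ignored) { }
        });
        owner.start();
        looper.post(() -> out.append("hello "));  // posted from another thread
        looper.post(() -> out.append("looper"));
        looper.quit();
        owner.join();
        System.out.println(out); // prints "hello looper"
    }
}
```

Messages are processed strictly in posting order on the owning thread, which is why updating state from the "loop" thread needs no further locking.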

Message

An Android Handler can carry content: a Bundle object can wrap a String, an Integer, or a binary Blob. From a thread, we call the Handler object's sendEmptyMessage or sendMessage methods to pass a Bundle to the Handler. The Handler class provides the handleMessage(Message msg) override, and msg.what distinguishes one message from another; unpacking the Bundle lets the Handler update the contents of the UI thread and refresh its controls. The Handler's message-sending sendXXXX methods are listed below; there are also postXXXX counterparts. The idea is much the same as in Win32, where one call returns immediately after posting and the other only after the message has been processed.

Message: the message object, the thing stored in the Message Queue; one MessageQueue can contain many Messages. A Message instance is usually obtained through the static Message.obtain() method, which has several overloads to choose from. It does not necessarily create a new instance: it first checks the Message Pool for a reusable Message instance and, if one exists, takes it out and returns it. Only when the pool has no usable Message does it create a new Message object from the given arguments. Calling removeMessages() deletes a Message from the Message Queue and puts it back into the Message Pool. Besides this approach, a Message instance can also be obtained through a Handler object's obtainMessage().

final boolean sendEmptyMessage(int what)
final boolean sendEmptyMessageAtTime(int what, long uptimeMillis)
final boolean sendEmptyMessageDelayed(int what, long delayMillis)
final boolean sendMessage(Message msg)
final boolean sendMessageAtFrontOfQueue(Message msg)
boolean sendMessageAtTime(Message msg, long uptimeMillis)
final boolean sendMessageDelayed(Message msg, long delayMillis)
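The obtain()/recycle() pooling behavior described above can be sketched in plain Java. This is an illustration only, not the Android implementation; the PooledMsg class name and fields are invented for the sketch:

```java
import java.util.ArrayDeque;

// Conceptual sketch of the Message pool behind Message.obtain() and
// recycle() (NOT the Android implementation): obtain() reuses a pooled
// instance when one exists, otherwise allocates; recycle() clears the
// instance and returns it to the pool.
public class PooledMsg {
    public int what;
    public Object obj;

    private static final ArrayDeque<PooledMsg> POOL = new ArrayDeque<>();

    public static PooledMsg obtain() {
        PooledMsg m = POOL.poll();           // try the pool first
        return (m != null) ? m : new PooledMsg();
    }

    public void recycle() {
        what = 0;                            // clear state before pooling
        obj = null;
        POOL.push(this);
    }

    public static void main(String[] args) {
        PooledMsg a = PooledMsg.obtain();    // pool empty: new instance
        a.what = 42;
        a.recycle();                         // back into the pool
        PooledMsg b = PooledMsg.obtain();    // reuses the same instance
        System.out.println(a == b);          // prints "true"
        System.out.println(b.what);          // prints "0" (state was cleared)
    }
}
```

Pooling matters here because messages are created and discarded at high frequency; reuse avoids pressuring the garbage collector.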

MessageQueue

A data structure, as the name suggests: a message queue, the place where messages are stored. Each thread can own at most one MessageQueue data structure.
Creating a thread does not automatically create its MessageQueue; usually a Looper object is used to manage the thread's MessageQueue. When the main thread is created, a default Looper object is created for it, and creating the Looper object automatically creates a Message Queue. Other, non-main threads do not get a Looper automatically; when one is needed, create it by calling the prepare() function.
       
A note on java.util.concurrent

Programmers who have done Java development will be familiar with the Concurrent classes, an important feature added in JDK 1.5. On handheld devices we do not recommend using them directly, given the Task mechanism Android has already designed for us, so we will not dwell on them here.

Task and AsyncTask

Android also provides a processing model distinct from raw threads: Task and AsyncTask. The open source code shows these to be wrappers over the Concurrent classes that let developers handle asynchronous tasks conveniently. There are of course many more synchronization methods and techniques; for reasons of time and space we will not describe them further.
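Since AsyncTask wraps java.util.concurrent, its core pattern, running work in the background and then handing the result to a callback, can be sketched with a plain ExecutorService. This is a simplified illustration, not AsyncTask itself (the BackgroundTask and Callback names are invented); on Android the callback would additionally be marshalled to the UI thread via a Handler:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Conceptual sketch of the AsyncTask pattern using plain
// java.util.concurrent: run work on a background thread, then deliver
// the result to a callback ("doInBackground" + "onPostExecute").
public class BackgroundTask {
    static final ExecutorService EXEC = Executors.newSingleThreadExecutor();

    interface Callback { void onPostExecute(String result); }

    static Future<?> execute(String input, Callback cb) {
        return EXEC.submit(() -> {
            String result = input.toUpperCase(); // stand-in for slow work
            cb.onPostExecute(result);            // deliver the result
        });
    }

    public static void main(String[] args) throws Exception {
        StringBuilder out = new StringBuilder();
        execute("done", out::append).get(); // block until the task finishes
        EXEC.shutdown();
        System.out.println(out); // prints "DONE"
    }
}
```

The Future returned by submit() gives the caller a handle to wait for or cancel the work, much as AsyncTask exposes cancel() and get().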

    • Curated Home

    • Game Development
    • Java*
    • Android*
    • Developers
    • Students
    • Android*
    • hasStreaming Property


      This property indicates whether streaming has been enabled for this application

      intel.xdk.device.hasStreaming

      Description:

      This property indicates whether streaming has been enabled for this application. Functions under intel.xdk.player for station and shoutcast will not be available if this is false.

      Example:

      
      alert(intel.xdk.device.hasStreaming);
                   

      Version:

      This property is available in appMobi Version 3.0.0


      pause Event


      This event is fired when the screen locks

      Description:

      This event is triggered when the screen turns off due to power saving timeout or the user presses the power button.

      Example:

      
      document.addEventListener("intel.xdk.device.pause",function(evt){
              intel.xdk.player.pause();
      },false); 
                                

      Version:

      This event is available in appMobi Version 3.4.0


      hasCaching Property


      This property says whether caching has been enabled for this application

      AppMobi.device.hasCaching

      Description:

      This property says whether caching has been enabled for this application. Functions under AppMobi.cache for mediacache will not be available if this is false.

      Example:

      
      alert(AppMobi.device.hasCaching);
                   

      Version:

      This property is available in appMobi Version 3.0.0


      closeRemoteSite Method


      Call this command to force a remote site opened with showRemoteSite or showRemoteSiteExt to close.

      intel.xdk.device.closeRemoteSite();

      Available Platforms:

      Example:

       
      function onDeviceReady(){
              try
              {
                      intel.xdk.device.mainViewExecute('intel.xdk.cache.setCookie("remoteSiteCookie","'+iCookieValue+'",-1);')
                      intel.xdk.device.closeRemoteSite();
              } catch(e) {
                      console.log("oops "+e);
              }
      }
                                

      Version:

      This method is available in appMobi Version 3.0.0


      showRemoteSite Method


      This function is used to show a remote web site in a different web view.

      intel.xdk.device.showRemoteSite(url, closeImageX, closeImageY, closeImageWidth, closeImageHeight)

      Description:

      This function is used to show a remote web site in a different web view. Touching the close image will shut down the web view and return the user to the normal application view.

The url parameter is the new view’s target URL. The image coordinates define the position, width, and height of the close image that the user may touch to close the web view. By default, the close image is 48x48 pixels and positioned in the upper left-hand corner of the screen.

      When the close button is touched, it fires an intel.xdk.device.remote.close event.

      Available Platforms:

      Parameters:

      • url: The URL for the web view to open.
      • closeImageX: The position of the button to close the web view from the left edge in pixels.
      • closeImageY: The position of the button to close the web view from the top edge in pixels.
      • closeImageWidth: The width of the button to close the web view in pixels.
      • closeImageHeight: The height of the button to close the web view in pixels.

        Events:

      • intel.xdk.device.remote.close : The event is fired once a user touches the close image and the new web view is closed down.

        Example:

        
        intel.xdk.device.showRemoteSite("http://www.twitter.com/",280,0,50,50);
                    

        Version:

        This method is available in appMobi Version 3.2.0


      showRemoteSiteExt Method


      This function is used to show a remote web site in a different web view.

      intel.xdk.device.showRemoteSiteExt(url, closeImagePortraitX, closeImagePortraitY, closeImageLandscapeX, closeImageLandscapeY, closeImageWidth, closeImageHeight)

      Description:

      This function is used to show a remote web site in a different web view. Touching the close image will shut down the web view and return the user to the normal application view.

The url parameter is the new view’s target URL. The image coordinates define the position, width, and height of the close image that the user may touch to close the web view. By default, the close image is 48x48 pixels and positioned in the upper left-hand corner of the screen.

      When the close button is touched, it fires an intel.xdk.device.remote.close event.

This method replaces intel.xdk.device.showRemoteSite.

      Available Platforms:

      Parameters:

      • url: The URL for the web view to open.
      • closeImagePortraitX: The position of the button to close the web view from the left edge in pixels when the device is in the portrait orientation.
      • closeImagePortraitY: The position of the button to close the web view from the top edge in pixels when the device is in the portrait orientation.
      • closeImageLandscapeX: The position of the button to close the web view from the left edge in pixels when the device is in the landscape orientation.
      • closeImageLandscapeY: The position of the button to close the web view from the top edge in pixels when the device is in the landscape orientation.
      • closeImageWidth: The width of the button to close the web view in pixels.
      • closeImageHeight: The height of the button to close the web view in pixels.

        Events:

      • intel.xdk.device.remote.close : The event is fired once a user touches the close image and the new web view is closed down.

        Example:

        
        intel.xdk.device.showRemoteSiteExt("http://www.google.com/",280,0,50,50);
                    

        Version:

        This method is available in appMobi Version 3.3.0
