Saturday, January 28, 2017

Guide: Installing and Running a GNU/Linux Environment on Any Android Device

As many of you may well be aware, the Android operating system is powered by the Linux kernel underneath. Despite the fact that both Android and GNU/Linux are powered by the same kernel, the two operating systems are vastly different and run completely different types of programs.

Sometimes, however, the applications available on Android can feel a bit limited or underwhelming, especially when compared to their desktop counterparts. Fortunately, you can get a GNU/Linux environment up and running on any Android device, rooted or non-rooted. (The following instructions assume a non-rooted device.)

For those power users on Android tablets, or other Android devices that have large screens (or can plug into a bigger screen), the ability to run desktop Linux software can go a long way towards increasing the potential that an Android device has for productivity.


Setting Up GNU/Linux on Android

To get a GNU/Linux environment set up on your Android device, you only need to install two applications from the Google Play store: GNURoot Debian and XServer XSDL. After you do that, you will only need to run a small handful of Linux commands to complete the installation.

GNURoot Debian provides a Debian Linux environment that runs within the confines of the Android application sandbox. It accomplishes this by leveraging a piece of software called proot, a userspace re-implementation of Linux's chroot functionality, which is used to run a guest Linux environment inside of a host environment. Chroot normally requires root access to function, but by using proot you can achieve similar functionality without needing root privileges.
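
To make this more concrete, the sketch below shows roughly the kind of proot invocation involved. The rootfs path and bind mounts are illustrative assumptions – GNURoot performs all of this for you – but the flags are standard proot options:

  # Run a shell inside a Debian root filesystem without real root access:
  # -0 fakes uid 0 ("root") inside the guest, -r points at the guest rootfs,
  # and -b makes host directories visible inside it.
  proot -0 -r ./debian-rootfs -b /proc -b /dev /bin/bash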

GNURoot comes with a built-in terminal emulator for accessing its Debian Linux environment. This is sufficient for running command-line software; running graphical software, however, requires an X server to be available as well. The X Window System was designed with separate client and server components in order to provide more flexibility (a faster, more powerful UNIX mainframe could run the applications as X clients, displaying on X server instances running on much less powerful and less sophisticated terminals).

In this case, we will use a separate application, XServer XSDL, that GNURoot applications will connect to as clients. XServer XSDL is a complete X server implementation for Android powered by SDL that has many configurable options such as display resolution, font size, different types of mouse pointer behavior, and more.


Step-by-Step Guide

1. Install GNURoot Debian and XServer XSDL from the Play Store.

2. Run GNURoot Debian. The Debian Linux environment will unpack and initialize itself, which will take a few minutes. Eventually, you will be presented with a "root" shell. Don't be misled by this – it is actually a fake root account that is still running within the confines of the Android application sandbox.

3. Run apt-get update and apt-get upgrade to ensure you have the most up-to-date packages on your system. apt-get is the command-line front end to Debian's package management system, and it is what you will use to install software into your Debian Linux environment.

4. Once you are up-to-date, it's time to install a graphical environment. I recommend installing LXDE as it is simple and lightweight. (Remember, you're running Debian with all the overhead of the Android operating system in the background, so it's best to conserve as many resources as you can.) You can either do apt-get install lxde to install the desktop environment along with a full set of tools, or apt-get install lxde-core to only install the desktop environment itself.

5. Now that we have LXDE installed, let's install a few more things to complete our Linux setup.

XTerm – this provides access to the terminal while in a graphical environment
Synaptic Package Manager – a graphical front-end to apt-get
PulseAudio – a sound server that lets applications play back audio (sound will be forwarded to XServer XSDL over the network)

Run apt-get install xterm synaptic pulseaudio to install these utilities.

6. Finally, let's get the graphical environment up and running. Start XServer XSDL and have it download the additional fonts. Eventually you will get to a blue screen with some white text – this means that the X server is running and waiting for a client to connect. Switch back to GNURoot and run the following two commands:

  export DISPLAY=:0 PULSE_SERVER=tcp:127.0.0.1:4712
  startlxde &

Then, switch to XServer XSDL and watch the LXDE desktop come up onto your screen.

I recommend putting the above two commands into a shell script so that you can easily restart LXDE if you close the session or if you need to restart your device.
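
A minimal version of that script, using the same values as above, could look like this (the file name is just a suggestion):

  #!/bin/sh
  # launch-lxde.sh - point GNURoot's programs at XServer XSDL and start the desktop
  export DISPLAY=:0
  export PULSE_SERVER=tcp:127.0.0.1:4712
  startlxde &

Make it executable with chmod +x launch-lxde.sh, and from then on a single ./launch-lxde.sh inside GNURoot brings the desktop back up.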


Installing Linux Applications

Congrats! You've successfully gotten Debian Linux up and running on your Android device, but what good is running Linux without apps? Fortunately, you've got a massive repository of Linux applications at your fingertips just waiting to be downloaded. We'll use the Synaptic Package Manager, which we installed earlier, to access this repository.

Click the "start" button at the lower-left hand corner, click Run, and then type synaptic. The Synaptic Package Manager will load. From here, simply press the Search button at the top and then type the name of the application you'd like to install. Once you've found an application, right click it and select "Mark for Installation". When you are finished marking packages, click the Apply button at the top to start the installation. Uninstalling packages follows the same procedure, except by right-clicking and selecting "Mark for Removal" instead.

Of course, since this isn't a real Linux installation but rather a Linux environment running on top of, and within the constraints of, Android, there are a couple of limitations to be aware of. Some applications will refuse to run or will crash, usually because resources that are normally exposed on GNU/Linux systems are kept hidden by Android. Also, if a regular Android app can't do something, then usually a Linux application running within Android can't either, so you won't be able to perform tasks such as partitioning hard drives. Lastly, games requiring hardware acceleration will not work. Most standard everyday apps, however, will run just fine. Some examples include Firefox, LibreOffice, GIMP, Eclipse, and simple games like PySol.


I hope that you find this tutorial useful. While I personally performed these steps on my Google Pixel C, you can do this on most Android devices – preferably a tablet with access to keyboard and mouse peripherals, of course. If you already run a GNU/Linux distribution on your Android device, let us know what you are using it for below!



from xda-developers http://ift.tt/2jCgAqZ
via IFTTT

Rovo89: Update on Development of Xposed for Nougat

The reason why I personally continue to use Android 6.0 Marshmallow on my OnePlus 3, despite OnePlus pushing out the Nougat update for the phone to stable channels, is the presence of Xposed. The Xposed framework and its module ecosystem form a crucial part of the Android experience that I prefer — to the point where I am willing to forego the latest OS update from the OEM just to savor this sweet fruit.

While Xposed for Nougat is taking a while to come along, and some of us do not mind waiting, it has been a while since we last heard about the progress of the project.

XDA Senior Recognized Developer rovo89 took some time to bring us up to date on the current situation regarding the Xposed for Nougat project:

"It seems that more and more people get nervous about whether (and when) there will be Xposed for Nougat or not, so I felt I should say something.

Why does it take that long? Because with every release, I try to ensure that Xposed integrates nicely with the improvements in the new ART version. The step from Lollipop to Marshmallow wasn't huge. It was an evolution; some things even made it possible to integrate Xposed in a more elegant way. On the whole, it was mainly careful porting rather than innovating.

With Nougat, something fundamental has changed. If you're using Nougat already, you'll have noticed that installations are much faster now. That's because APKs aren't compiled immediately (AOT), but start in (slower) interpreting mode. Sounds bad, but they have enabled JIT, which will quickly compile those methods that are used very often. That will restore the well-known and constantly improving performance of native code. Besides that, ART keeps a list of these frequently used methods ("profiling"). When the device is idle, it finally does the AOT compilation, but based on the profiling data. After that, you get the great performance right after starting the app. JIT is still waiting in case the usage patterns change, and I think it will also adjust the profile and improve the AOT compilation.

That results in various different compilation states and more complexity. Besides that, there were many issues in the past caused by Xposed's need to recompile the whole ROM and all apps: It sometimes caused boot loops when the odex files were too heavily pre-optimized, it blocked quite some storage space to store the recompiled files, and I needed to disable some optimizations like inlining and direct pointer calls. I hope that I can make use of the JIT compiler to avoid that in Nougat. If Xposed knew from where a method is called, it could invalidate the callers' compiled code, so that they would temporarily use the interpreter. If they're important enough, JIT will recompile them.

I have already done a lot of research and experiments for this and I'm currently trying to implement this. But as you can imagine, all of that is much effort and can easily take hundreds of hours….." <continued in forum post>
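
(As an aside for readers who want to see the compilation states rovo89 describes on their own Nougat device: the dexopt tooling can be poked at over adb. These commands come from AOSP's ART documentation rather than from rovo89's post, the package name is a placeholder, and exact behavior can vary between ROMs.)

  # Force profile-guided AOT compilation of a single app, the same kind of
  # compile the idle-time job performs once profiling data exists
  adb shell cmd package compile -m speed-profile -f com.example.app

  # Do the same for every installed package
  adb shell cmd package compile -m speed-profile -a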

The main issue, as is usual with hobbyist projects, is the allocation of time, and we understand where rovo89 is coming from. Even as the Xposed project currently stands, it represents months of effort from various developers to make something the end user can enjoy in such a simple and distributable manner.

As they say, Rome was not built in a day, but the bricks were laid every hour:

"So yes, I'm still working on Nougat support, whenever my free time allows it, but I don't have any idea when it will be done. Once it's done, you'll know."

rovo89

Android isn't perfect, and Xposed is what allows us to fix what the original developer won't. The wait for the ultimate Android fix continues on the newest OS, and we wish rovo89 the best of luck from our end.

You can read the full statement in the forum post. Are you waiting for Xposed too? Let us know in the comments!



from xda-developers http://ift.tt/2kyQF1C
via IFTTT

New Leak Shows the LG Watch Style In Silver and Rose Gold, may Start at $249

We've known for a while now that LG is likely working on two new Android Wear watches, allegedly called the LG Watch Style and the Watch Sport. While we have already seen images of both the LG Watch Style and Watch Sport, those were unfortunately quite low resolution renders. Now, a new leak from Evan Blass (@evleaks) gives us a clearer picture (literally) of what LG's upcoming Android Wear smartwatch, the Watch Style, may look like.

The images of the LG Watch Style shared by Evan Blass match those leaked by TechnoBuffalo a few days ago.

As you can see in the image, the renders show the LG Watch Style in both silver and rose gold colors, sporting leather straps. In terms of design, the Watch Style appears to be a classic fashion watch unlike its counterpart, the Watch Sport, which is said to be the bigger of the two watches and will likely feature a heart rate sensor, GPS, and cellular connectivity.

The LG Watch Style and LG Watch Sport are expected to launch at Google's platform event on February 9, where Google is also expected to detail its much-awaited Android Wear 2.0 update. Both watches are said to be manufactured by LG and Google in a Nexus-style collaboration, meaning the hardware will be handled by LG with Google providing the software and any future updates.

If previous rumors are to be believed, the LG Watch Style will feature a 1.2″ 360×360 AMOLED screen, a 240mAh battery, 512MB of RAM, and Bluetooth (no cellular radio) for connectivity. Furthermore, according to a source speaking with AndroidPolice, the Watch Style will launch at a price point of $249. For more concrete details, though, we will have to wait for the official announcement.


Source: @evleaks Source: AndroidPolice



from xda-developers http://ift.tt/2jIAJcu
via IFTTT

A Guide to Editing RAW Photography — Get the Most out of Your Smartphone’s Camera


After exploring the RAW capabilities of my OnePlus 3T and Sony NEX-5 cameras, an array of readers responded with questions and comments on RAW photography and their experiences. Many expressed the desire to better learn how to edit photography and particularly how to deal with RAW file formats on both mobile devices and desktop operating systems, and I was thrilled to see such a willingness to engage in something new like RAW photography. I was also deeply happy to have several readers relate to me that I had inspired them to explore photography in general once again or even for the first time – it can come as a surprise to many that the device in their pockets is often their best choice for exploring. In light of these discoveries, my hope is that some assistance for those struggling to begin will continue to encourage those interested in photography, RAW or not, to persevere.

Remembering back to my first forays into photography and editing, I was lucky enough to ease into the prospect bit by bit, beginning with something as simple as the built-in editor in my HTC Incredible 2's gallery app. If I am remembering correctly, I stumbled upon Adobe Lightroom as an app for my iPad 3, which became my go-to editing device until I built my first desktop PC. Over the course of a month or so, I essentially explored each slider and option until I was relatively familiar with the program. I can easily recommend this to anyone with a lot of patience and curiosity, as you will inevitably find your own preferences along the way while also learning to use a powerful editing suite independently.

Nevertheless, having someone to guide you through the very first steps of editing and break down the menacing façade that Lightroom and other editors can present to the user is of course extremely useful. I will attempt to be that guide!


First Steps

As several curious and intrepid readers soon discovered, shooting in RAW is not necessarily the most intuitive experience, especially once one goes to find or edit the RAW format files they have produced. As RAW files, especially DNGs, are innately not images straight out of camera, nearly all gallery apps simply will not register that they exist, both on mobile and desktop operating systems. This is not a criticism of gallery apps, but rather an unavoidable reality of RAW formats. As such, you will want to either install one of a handful of free RAW file managers, or bite the bullet and pay for something like Photo Mate R3 (~$8). Adobe Lightroom for mobile devices is likely your absolute best option, being free and well-designed.

For those of you looking for something a bit different, Photo Mate R3 is a fully-fledged mobile editor with almost all of the granular controls that Lightroom and other desktop editors offer. It also provides a gallery function with an array of sorting options, allowing the viewer to, say, selectively view only RAW format images and preview their thumbnails. The only major downside I noted is a lack of granular noise reduction controls of the sort that Lightroom offers. RAW files express all the noise the camera generates (a lot) and can appear rather off-putting if one does not first consider that lossy formats like JPEGs include some often heavy-handed noise reduction that occurs as the RAW data is converted and compressed. RAW lets you decide how much noise reduction is needed, potentially preventing the overly-soft images that smartphone cameras are often infamous for.

If you have access to a computer, there are numerous free options for editing RAW photography like GIMP and RawTherapee. RawTherapee offers a genuinely impressive program that is solely dedicated to editing RAW format images and is easy to recommend. There is also Google's free Nik editing suite, which offers a dedicated program for noise reduction to assist those on a budget who can't stand noise but would prefer to keep their editing workflow as mobile as possible.

A brief glance at RawTherapee 5.0's interface (RawTherapee).
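
If you are curious what actually happens when a RAW file is "developed" into a viewable image, the free command-line tool dcraw (not covered in this article, but available on most desktop platforms) makes the individual steps explicit. The file name below is only a placeholder:

  # Develop a DNG from the command line: apply the camera's recorded white
  # balance (-w), demosaic with the high-quality AHD algorithm (-q 3), and
  # write a TIFF (-T) next to the original file, printing progress (-v).
  dcraw -v -w -q 3 -T IMG_0001.dng

The resulting TIFF is roughly the kind of developed image that editors such as Lightroom or RawTherapee build from the RAW data before their sliders are applied, which is why RAW editing leaves decisions like noise reduction in your hands rather than baked in as with a JPEG.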

For those of you willing to fork over the cash, however, my one true photo editing love has always been Adobe Lightroom. It may be an irrational attachment to the program I am simply most familiar with, but I find that it offers a wonderful, intuitive interface and an almost invaluable organizational aspect that allows you to comfortably back up a database of around 40+ GB of edited photos while still retaining exact change histories and the original files. While next to nothing compared to professional photographers or very serious amateurs, I've taken and edited thousands of photos in the 5 years I've been active, and have a history of almost every single one in my Lightroom library.

A small snippet of my primary Lightroom catalog. My edited photos can be found at my Flickr and VSCO accounts.

While verifying that my understanding of Adobe Lightroom mobile was accurate, I discovered that free users can in fact edit RAW formats without a CC subscription! While the free version loses a number of features, it is still well-featured and includes several noise reduction filters, albeit without the ability to fine-tune them (aside from picking low, medium, and high reduction options). Like Photo Mate R3, the Lightroom app offers a useful gallery feature that lets you preview RAW thumbnails and filter out non-RAW images. This app is definitely my recommendation for those looking for a slick, user-friendly solution. While experienced users may find some improved utility in Photo Mate R3's broader range of options, Lightroom will be more than enough for most mobile editors. This article provides a great overview of the app and its RAW editing features.


General Tips and Suggestions for Editing Photography

While providing granular tutorials for each of the applications mentioned above is a bit beyond the scope of this article, what I can do is explain some of the more common options you will have at your disposal, regardless of which one you choose to adopt. I will be using the desktop version of Adobe Lightroom (5.4) to demonstrate these features. After the process of finding your RAW files (usually .DNGs for mobile devices) and importing them into your app of choice, you will be presented with several options. Generally speaking, these options will be intended to modify the tone (exposure/lighting), white balance, and color in your photos.

Some of the most useful and intuitive methods of editing in Lightroom are relatively unique to it, and some exist only in the desktop app. My favorite way to modify a photo's tone is through the histogram (the graph at the top of the screenshot below), which allows you to click on any of its five sections (blacks, shadows, exposure, whites, highlights) and drag it left or right to reduce or increase the prevalence of that specific light type. The tone curve, found below the Basic section, can also be dragged about in a similar fashion, but is generally only needed for slightly modifying a nearly-complete image or recovering detail in an image that was drastically over- or underexposed. This can all generally also be done with the sliders you can see on the right, but this takes somewhat longer and is also not nearly as fun! A great exploration of the utility of histograms and how to read them can be found here.

Two images and their related histograms.

Traveling down the options in the menu pictured below, we begin with 'WB', or white balance. This is used to improve the accuracy of the color representation in photos by modifying the temperature and tint to direct the picture towards your preferred outcome, which may include fixing imperfect in-camera white balancing. In desktop and mobile Lightroom, you have the option of selecting the eye dropper, which effectively auto-corrects white balance once you direct it to a point on your photo that you know should be a neutral grey or white.

Tone settings come next, beginning with options for exposure and contrast. Exposure modifies the global brightness unselectively. Contrast further darkens darker areas of the image and brightens lighter areas. After these more heavy-handed options, there are more precise controls that can also be adjusted through the histogram on top, as I previously explained. The highlights slider will modify only the brightest sections of the image, allowing you to tame overexposed images (you may have seen or heard the term "blown highlights"). Shadows, on the other hand, can help recover lost detail in dark areas of images. Lastly, Whites and Blacks intuitively allow pixels leaning towards white or black to be made brighter or darker. Attentive readers may notice a theme so far of combinations of controls that offer large changes (whites, blacks) with controls that offer more detailed modifications to smaller parts of the image (highlights, shadows).

Continuing this trend, Clarity is effectively a method of only adding contrast to mid-tones (mid meaning the middle of the histogram). In doing so, the Clarity slider can give the benefit of added contrast while preventing the noise or grain (and often an uglier image) that can come from overuse of the global Contrast slider. This option is generally unique to Lightroom, but it can be partially replicated by experimenting with white and black levels (increased contrast would mean darker blacks and brighter whites). This method won't add edge detail like Clarity, but it will more subtly add contrast.

Saturation and Vibrance are the last basic settings one may frequently want to use. Saturation is the color equivalent of Exposure, allowing the user to globally deepen or lighten all colors in an image. Vibrance helps to avoid the downfall of global saturation changes by only adjusting the least (+) or most (-) saturated colors.

Finally, there are several more complex and granular settings that can be found in Lightroom and other desktop editing suites. Something I often find myself using is detailed saturation, hue, and luminance control (on the right), giving me the ability to, say, recover oversaturated blues or greens, or better express the yellows and oranges in a sunset photo with subpar white balance. The Detail section (on the left) is where noise reduction and sharpening settings can be found, very useful options to have when editing RAW files. Lightroom helpfully provides a small window with a highly magnified view, which makes it considerably easier to avoid introducing ugly artifacts or obscuring detail when modifying sharpness and adding noise reduction.



Practice, Practice, and More Practice!

As a tried-and-true trope of many a guide, my best suggestion for those just beginning to stretch their photography-editing legs is to not give up and keep trying. Mistakes will be made and modifications will be overdone, but in time you will begin to develop a more instinctive understanding of editing and likely come into a style and workflow of your own. Mine has taken many years to develop, and I clearly remember struggling at first, as well as taking a look at photos I'd edited years ago only to be aghast at the aesthetic decisions of past-me. I'm still learning more than 5 years in, and I even managed to learn a couple of new things about editing photos in the process of writing this. In all its breadth, photography is essentially an activity with constant opportunity for learning, and rather than being daunting, that simply makes it all the more exciting and rewarding.

Amidst the humbling response my previous article received, multiple readers shared some of their own impressive smartphone photography and blew me away. If you have taken any photos with your phone that you are proud of and would like to share, feel free to post them in the comments below this article, as well as on its corresponding Facebook posts or tweets. An upcoming article in this series will include a collection of user-submitted photography, so don't miss out!

Also ahead will be a brief tutorial on how to use the manual mode available on many modern smartphone cameras in order to best take advantage of their capabilities. 



from xda-developers http://ift.tt/2kyrDQr
via IFTTT

Friday, January 27, 2017

AutoVoice Integration Finally makes its way to Google Home, Here’s how to Use It

After a month in Google's approval limbo, AutoVoice has finally been approved for use as a third-party integration in Google Home. With AutoVoice integration, you can send commands to your phone that Tasker will be able to react to, allowing you to trigger countless automation scripts straight from your voice.

Previously, this required a convoluted workaround involving IFTTT sending commands to your device via Join, but now you can send natural language commands straight to your device. We at XDA have been awaiting this release, and now that it's here, we'll show you how to use it.


The True Power of Google Home has been Unlocked

The above video was made by the developer of AutoVoice, Joao Dias, prior to the approval of the AutoVoice integration. I am re-linking it here only to demonstrate the possibilities of this integration, which is something we can all now enjoy since Google has finally rolled out AutoVoice support for everyone. As with any Tasker plug-in, there is a bit of a learning curve involved, so even though the integration has been available since last night, many people have been confused as to how to make it work. I've been playing with this since last night and will show you how to make your own AutoVoice commands trigger through speaking with Google Home.

A request from Joao Dias, developer of AutoVoice: Please be aware that today is the first day that AutoVoice integration with Google Home is live for all users. As such, there may be some bugs that have yet to be stamped out. Rest assured that he is hard at work fixing anything he comes across before the AutoVoice/Home integration is released to the stable channel of AutoVoice in the Play Store.


Getting Started

There are a few things you need before you can take advantage of this new integration. The first, and most obvious, requirement is a Google Home device. If you don't have one yet, they are available in the Google Store among other retailers. Amazon Alexa support is pending approval as well, so if you have one of those you will have to wait before you can try out this integration. On the phone side, you will need Tasker and the AutoVoice beta installed.

Once you have each of these applications installed, it's time to get to work. The first thing you will need to do is enable the AutoVoice integration in the Google Home app. Open up the Google Home app and then tap on the Remote/TV icon in the top right-hand corner. This will open up the Devices page where it lists your currently connected cast-enabled devices (including your Google Home). Tap on the three-dot menu icon to open up the settings page for your Google Home. Under "Google Assistant settings" tap on "More." Finally, under the listed Google Home integration sections, tap on "Services" to bring up the list of available third-party services. Scroll down to find "AutoVoice" in the list, and in the about page for the integration you will find the link to enable the integration.

Once you have enabled this integration, you can now start talking to AutoVoice through your Google Home! Check if it is enabled by saying either "Ok Google, ask auto voice to say hello" or "Ok Google, let me speak to auto voice." If your Google Home responds with "sure, here's auto voice" and then enters the AutoVoice command prompt, the integration is working. Now we can set up AutoVoice to recognize our commands.


Setting up AutoVoice

For the sake of this tutorial, we will make a simple Tasker script to help you locate your phone. By saying any natural variation of "find my phone", Tasker will start playing a loud beeping noise so you can quickly discern where you left your device. Of course, you can easily make this more complex by perhaps locating your device via GPS then sending yourself an e-mail with a picture taken by the camera attached to it, but the part we will focus on is simply teaching you how to get Tasker to recognize your Google Home voice commands. Using your voice, there are two ways you can issue commands to Tasker via Google Home.

The first is by speaking your command exactly as you set it up. That means there is absolutely no room for error in your command. If you, for instance, want to locate your device and you set up Tasker to recognize when you say "find my phone" then you must exactly say "find my phone" to your Google Home (without any other words spliced in or placed at the beginning or end) otherwise Tasker will fail to recognize the command. The only way around this is to come up with as many possible variations of the command as you can think of, such as "find my device", "locate my phone", "locate my device" and hope that you remember to say at least one variant of the command you set up. In other words, this first method suffers from the exact same problem as setting up Tasker integration via IFTTT: it is wildly inflexible with your language.

The second, and my preferred method, is using Natural Language. Natural Language commands allow you to speak naturally to your device, and Tasker will still be able to recognize what you are saying. For instance, if I were to say something much longer like "Ok Google, can you ask auto voice to please locate my device as soon as possible" it will still recognize my command even though I threw in the superfluous "please" and "as soon as possible" into my spoken command. This is all possible thanks to the power of API.AI, which is what AutoVoice checks your voice command against to interpret what you meant to say and return with any variables you might have set up.
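
For the curious, the exchange with API.AI behind the scenes is an ordinary REST call. The sketch below is modelled on API.AI's v1 query endpoint as I understand it, not on anything AutoVoice itself exposes; the URL, version parameter, token, session ID, and spoken text are all assumptions to check against API.AI's own documentation:

  # Ask API.AI to interpret a freely-worded phrase; the JSON response contains
  # the matched intent ("action") and any variables ("parameters") it extracted.
  curl -s "https://api.api.ai/v1/query?v=20150910" \
    -H "Authorization: Bearer YOUR_CLIENT_ACCESS_TOKEN" \
    -H "Content-Type: application/json" \
    -d '{"query": "please locate my device as soon as possible", "lang": "en", "sessionId": "demo-session"}'

This is, roughly speaking, what happens each time you speak a natural language command: AutoVoice forwards the phrase, API.AI matches it to one of your commands, and Tasker reacts to the result.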

Sounds great! You are probably more interested in the second option, as I was. Unfortunately, Natural Language commands are taxing on Mr. Dias's servers, so you will be required to sign up for a $0.99 per month subscription in order to use them. It is a bit of a downer that this is required, but the fee is more than fair considering how little it costs and how powerful and useful it will make your Google Home.

Important: if you want to speak "natural language commands" to your Google Home device, then you will need to follow these next steps. Otherwise, skip to creating your commands below.


Setting up Natural Language Commands

Since AutoVoice relies on API.AI for its natural language processing, we will need to set up an API.AI account. Go to the website and click "sign up free" to make a free account. Once you are in your development console, create a new agent and name it AutoVoice. Make the agent private and click save to create the agent. After you save the agent, it will appear in the left sidebar under the main API.AI logo.

Once you have created your API.AI account, you will need to get your access tokens so that AutoVoice can connect to your account. Click on the gear icon next to your newly created agent to bring up the settings page for your AutoVoice agent.

Under "API keys" you will see your client access token and your developer access token. You will need to save both. On your device, open up AutoVoice beta. Click on "Natural Language" to open up the settings page and then click on "Setup Natural Language." Now enter the two tokens into the given text boxes.

Now AutoVoice will be able to send and receive commands from API.AI. However, this functionality is restricted until you subscribe to AutoVoice. Go back to the Natural Language settings page and click on "Commands." Right now, the command list should be empty save for a single command called "Default Fallback Intent." (Note in my screenshot, I have set up a few of my own already). At the bottom, you will notice a toggle called "Use for Google Assistant/Alexa." If you enable this toggle you will be prompted to subscribe to AutoVoice. Accept the subscription if you wish to use Natural Language commands.


Creating Tasker Profiles to react to Natural Language Commands

Open up Tasker and click on the "+" button in the bottom right hand corner to create a new profile. Click on "Event" to create a new Event Context. An Event Context is a trigger that is only fired once when the context is recognized – in this case, we will be creating an Event linked to an AutoVoice Natural Language Command. In the Event category, browse to Plugin –> AutoVoice –> Natural Language.

Click on the pencil icon to enter the configuration page to create an AutoVoice Natural Language Command. Click on "Create New Command" to build an AutoVoice Command. In the dialog box that appears, you will see one text field for entering your command and another for entering the response you want Google Home to say. Type or speak the commands you want AutoVoice to recognize. While it is not required for you to list every possible variant of the command you want it to recognize, list at least a few just in case.


Pro-tip: you can create variables out of your input commands by long-pressing on one of the words. In the pop-up that shows up, you will see a "Create Variable" option alongside the usual Cut/Copy/Select/Paste options. If you select this, you will be able to pass this particular word as a variable to API.AI, which can be returned through API.AI. This can be useful for when you want Google Home to respond with variable responses.

For instance, if you build a command saying "play songs by $artist" then you can have the response return the name of the artist that is set in your variable. So you can say "play songs by Muse" or "play songs by Radiohead" under the same command, and your Google Home will respond with the same band/artist name you mentioned in your command. My tutorial below does not make use of this feature as it is reserved for more advanced use cases.


Once you are done building your command, click finished. You will see a dialog box pop up asking for what you want to name the natural language command. Name it something descriptive. By default it names the command after the first command you entered, which should be sufficient.

Next, it will ask you what action you want to set. This allows you to customize what command is sent to your device, and it will be stored in %avaction. For instance, if you set the action to be "findmydevice", the text "findmydevice" will be stored in the %avaction variable. This won't serve any purpose for our tutorial, but in later tutorials where we cover more advanced commands, we will make use of this.

Exit out of the command creation screen by clicking on the checkmark up top, as you are now finished building and saving your natural language command. Now, we will create the Task that will fire off when the Natural Language Command is recognized. When you go back to Tasker's main screen, you will see the "new task" creation popup. Click on "new task" to create a new task. Click on the "+" icon to add your first Action to this Task. Under Audio, click on "Media Volume." Set the Level to 15. Go back to the Task editing screen and you will see your first action in the list. Now create another Action but this time click on "Alert" and select "Beep." Set the Duration to 10,000ms and set the Amplitude to 100%.

If you did the above correctly, you should have the following two Actions in the Task list.

Exit out of the Task creation screen and you are done. Now you can test your creation! Simply say "Ok Google, ask auto voice to find my phone" or any natural variation that comes to mind, and your phone should start loudly beeping for 10 seconds. The only required part is the trigger that makes Google Home start AutoVoice – the "Ok Google, ask auto voice" or "Ok Google, let me speak to auto voice" bit. Anything you say afterwards can be as free-flowing and natural as you like; the magic of API.AI makes it so that you can be flexible with your language!

Once you start creating lots of Natural Language Commands, it may be cumbersome to edit all of them from Tasker. Fortunately, you can edit them straight from the AutoVoice app. Open AutoVoice and click on "Natural Language" to bring up its settings. Under Commands, you should now see the Natural Language command we just made! If you click on it, you can edit nearly every single aspect of the command (and even set variables).


Creating Tasker Profiles to react to non-Natural Language Commands

In case you don't want to subscribe to AutoVoice, you can still create a similar command as above, but it will require you to list every possible combination of phrases you can think of to trigger the task. The biggest difference in this setup is that when you are creating the Event Context you must select AutoVoice Recognized rather than AutoVoice Natural Language. You will build your command list and responses in a similar manner, but API.AI will not handle any part of parsing your spoken commands, so you must be 100% accurate in speaking one of these phrases. Of course, you will still have access to editing any of these commands much like you could with Natural Language.

Otherwise, building the linked Task is the same as above. The only thing that differs is how the Task is triggered. With Natural Language, you can speak more freely. Without Natural Language, you have to be very careful how you word your command.


Conclusion

I hope you now understand how to integrate AutoVoice with Google Home. For any Tasker newbies out there, getting around the Tasker learning curve may still pose a problem. But if you have any experience with Tasker, this tutorial should serve as a nice starting point to get you to create your own Google Home commands. Alternatively, you can view Mr. Dias' tutorial in video form here.

In my limited time with the Google Home, I have come up with about a dozen fairly useful creations. In future articles, I will show you how to make some pretty cool Google Home commands such as turning on/off your PS4 by voice, reading all of your notifications, reading your last text message, and more. I won't spoil what I have in store, but I hope that this tutorial excites you for what will be coming!



from xda-developers http://ift.tt/2kCU2rs
via IFTTT