Google today held its annual developer conference in Mountain View, California, and announced a slew of new products and features. A few of the most important announcements centred around Android: the operating system will receive a raft of updates in its next version, currently called Android P. Here is a rundown of the top features, some of which were first revealed alongside the initial developer preview.
Every few years, there’s a development in the imaging world that brings the megapixel wars back to the front page for every photographer. After years of chasing ever denser sensors, most companies that make imaging devices settled on varying degrees of resolution. Thanks to their larger sensors, medium format cameras offer the highest resolutions, but Hasselblad has just dropped a nuke on the whole megapixel war with the Hasselblad H6D-400c.
The new imaging device from the Swedish camera maker is capable of shooting images with a whopping resolution of 400 megapixels. The camera achieves this by shifting its 100-megapixel CMOS sensor by one pixel four times (once in each direction), followed by two more exposures with the sensor shifted by half a pixel. The six images are combined to give you incredibly detailed 400-megapixel images with “real RGB colour data for each pixel.” The image files contain so much information that each of them weighs a whopping 2.4GB and requires the camera to be shot tethered to a laptop. Tethering is handled by an onboard USB 3.0 Type-C port.
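To make the half-pixel-shift idea concrete, here is a purely illustrative NumPy sketch of how four captures offset by half a pixel can be interleaved into an image with twice the linear resolution. This is a toy simplification (Hasselblad's actual six-frame pipeline also uses full-pixel shifts to gather complete RGB data per site); the function name and two-times factor are assumptions for illustration.

```python
import numpy as np

def combine_pixel_shift(frames):
    """Toy pixel-shift super-resolution: interleave four captures
    taken at half-pixel offsets (0,0), (0,0.5), (0.5,0), (0.5,0.5)
    into one image with 2x the linear resolution."""
    h, w = frames[0].shape
    out = np.empty((2 * h, 2 * w), dtype=frames[0].dtype)
    out[0::2, 0::2] = frames[0]  # no shift
    out[0::2, 1::2] = frames[1]  # half pixel right
    out[1::2, 0::2] = frames[2]  # half pixel down
    out[1::2, 1::2] = frames[3]  # half pixel down and right
    return out

# Four tiny 2x2 "captures" become one 4x4 image
tiles = [np.full((2, 2), i, dtype=np.uint8) for i in range(4)]
merged = combine_pixel_shift(tiles)
print(merged.shape)  # (4, 4)
```

Each output pixel comes from a physically distinct sensor position, which is why the combined file carries genuinely new detail rather than interpolated data.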
Obviously, if you’re going to be using the multi-shot mode to create 400-megapixel images with the H6D-400c, then your usage is going to be severely limited to shooting just stationary objects. The price tag of $47,995 should also tell you that it is only for the very serious professional, but let’s face it, who wouldn’t love to take a selfie with this megapixel beast?
If you'd like to see just what level of detail the H6D-400c is capable of capturing, head over to this link.
Google made jaws drop to the ground during the I/O keynote address with its demo of the Duplex calling system, a new technology capable of carrying out natural conversations over the phone with the ultimate goal of being able to assist users with ‘real world tasks’. During the demo, CEO Sundar Pichai played a recording of Duplex scheduling a hair salon appointment in real time. This was followed by another demo of the system making reservations at a restaurant.
While AI assistants and bots are fast becoming an indispensable part of our ‘smart’ existence, what’s truly remarkable about Duplex is its ability to mimic human conversational tone and style while executing its assigned tasks. The bots can comprehend natural speech and use word fillers like ‘Ohh, I gotcha’ and ‘Umm’, raise their pitch to mark a question, and exchange pleasantries. The robotic voice, awkward sentence formations and struggles with simple commands and words that we normally associate with present-day smart assistants are absent from Duplex. It can engage in a conversation, adjusting to the flow of the human on the other side of the line instead of expecting the human to adapt to the capabilities of the system. The result? A seamless, conversational experience where a human and a bot follow the natural flow of dialogue, just the way two humans would.
Making Bots Talk Like Humans
The Duplex system was developed by constraining a neural network to closed domains and then training it extensively to carry out conversations within the confines of those domains. Despite its AI prowess, Duplex cannot carry out general conversations outside them.
At the heart of this technology is Google’s vision to make conversations with machines comfortable and natural. Even so, training a machine learning system to conduct natural conversations is no mean feat. Generating natural-sounding speech with the right intonations is a challenge in itself; add to that the complexities of conversational language and natural behaviour, and getting a bot to talk like a human in real time, in real-life scenarios, sounds almost improbable.
After years of research, Google has finally found a way to do it. At the core of the system is a recurrent neural network (RNN), built using TensorFlow Extended (TFX) and trained on anonymised phone conversation data to tide over these myriad challenges. In a blog post explaining the technology behind Duplex, Google said, “The network uses the output of Google’s automatic speech recognition (ASR) technology, as well as features from the audio, the history of the conversation, the parameters of the conversation (e.g. the desired service for an appointment, or the current time of day) and more. We trained our understanding model separately for each task, but leveraged the shared corpus across tasks. Finally, we used hyperparameter optimization from TFX to further improve the model.”
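The "history of the conversation" that Google mentions is exactly what a recurrent network is good at carrying: its hidden state is updated at every turn and summarises everything heard so far. The sketch below is a minimal vanilla-RNN forward pass over per-turn feature vectors, not Google's actual architecture; dimensions and random weights are stand-ins for what would be learned from real call data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: one feature vector per conversation turn
# (imagine ASR-text features plus task parameters), plus a hidden state.
FEAT, HIDDEN = 8, 16
Wx = rng.normal(size=(HIDDEN, FEAT)) * 0.1   # input weights (untrained)
Wh = rng.normal(size=(HIDDEN, HIDDEN)) * 0.1  # recurrent weights
b = np.zeros(HIDDEN)

def rnn_forward(turns):
    """Run a vanilla RNN over the turns of a conversation.
    The hidden state h carries the conversation history forward."""
    h = np.zeros(HIDDEN)
    for x in turns:
        h = np.tanh(Wx @ x + Wh @ h + b)
    return h  # a fixed-size summary of the dialogue so far

conversation = [rng.normal(size=FEAT) for _ in range(5)]
state = rnn_forward(conversation)
print(state.shape)  # (16,)
```

In a real system this summary state would feed a decoder that chooses the next thing to say; here it only demonstrates how variable-length dialogue history compresses into a fixed-size vector.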
Efficient and Natural
In spontaneous speech, people talk faster and less coherently. The Duplex system is trained not only to keep up with the pace of conversations but also to pick out the right context from complex sentences. For instance, in the demo where Duplex is making a restaurant reservation, the request is for a table for four on Wednesday at 7 pm, but the human on the other end of the line mishears it as a table for seven. Duplex, however, does not get confused; instead, it clarifies that the table is for four people and the reservation is for 7 pm.
Apart from its efficiency, another aspect that had people floored was Duplex’s ability to keep conversations natural. Google explains this has been achieved through a combination of “concatenative text to speech (TTS) engine and a synthesis TTS engine (using Tacotron and WaveNet) to control intonation depending on the circumstance.”
The minds behind this AI-powered system have paid extra attention to detail, accounting for factors such as speech disfluencies – the use of uh’s and hmmm’s – and speech latency, and training the system accordingly. For instance, when a person says ‘hello’, they expect an instant reply, so to keep latency low, Google relies on faster, low-confidence models for tasks such as speech recognition and end-pointing. In certain cases, when a low-latency requirement coincides with a hesitant response, the system bypasses the RNN altogether and relies on faster approximations to give a suitable reply without delay. On the other hand, the developers also understood that low latency is not always the way to go. In some cases, for instance when replying to complex sentences or queries, there needs to be a gap between dialogues to make the conversation feel natural, and the Duplex system has been trained to do just that. Just as a human would take time to process the information in a complex sentence and think of a suitable reply, Duplex takes the right pauses before responding.
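The fast-path/slow-path trade-off described above can be sketched as a simple dispatcher: cheap canned responses for utterances that demand an instant reply, and the expensive model for everything else. This is a loose, hypothetical illustration; the function names, canned phrases and routing rule are all invented, not Google's implementation.

```python
def fast_reply(utterance):
    """Cheap approximation: instant canned replies for simple prompts."""
    canned = {"hello": "Hi there!", "thanks": "You're welcome!"}
    return canned.get(utterance.lower())

def slow_reply(utterance):
    """Stand-in for the full (expensive) understanding pipeline."""
    return f"Let me check that: {utterance}"

def respond(utterance):
    """Route simple utterances to the fast path, bypassing the heavy
    model; complex ones get the full pipeline and a natural pause."""
    reply = fast_reply(utterance)
    if reply is not None:
        return reply  # instant path: no model latency at all
    return slow_reply(utterance)  # slower, but worth the pause

print(respond("hello"))                   # Hi there!
print(respond("a table for four at 7 pm"))
```

The same structure also explains the deliberate pauses: when the slow path runs, the extra processing time doubles as the "thinking" gap a human would naturally leave.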
Google Duplex is all about making supported tasks more convenient. A user just has to interact with the Google Assistant instead of making actual phone calls for chores like booking appointments and reservations. Apart from being extremely convenient, the Duplex system can operate asynchronously, requesting reservations even during off-hours or with limited connectivity. It could also prove a great tool for transcending language and accessibility barriers, helping users with hearing disabilities, or those unfamiliar with the local language, carry out everyday tasks with greater ease.
Google Duplex is designed to be a fully automated system, capable of pulling off sophisticated conversations without human involvement. To get there, the system is trained in each new domain under real-time supervision, just the way you’d train a student or new employee in a new discipline or role by making them work under close watch. In this case, experienced operators act as instructors: they monitor the system and tweak its behaviour in real time until it achieves the desired level of quality. Once the Duplex system is well trained in a given domain, the supervision stops and it can make calls autonomously.
Duplex also comes with self-monitoring capabilities that allow it to assess which tasks are beyond its capability, at which point it ceases to act autonomously and signals a human operator to take over.
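That self-monitoring behaviour boils down to a confidence-gated handoff: act autonomously only when the system is sufficiently sure of itself, otherwise escalate. The snippet below is a deliberately minimal sketch; the threshold value and function names are assumptions for illustration only.

```python
def handle_task(task, confidence, threshold=0.8):
    """Confidence-gated handoff: the bot completes the task on its own
    only when its self-assessed confidence clears the threshold;
    otherwise it signals a human operator to take over."""
    if confidence >= threshold:
        return f"bot completed: {task}"
    return f"escalated to human operator: {task}"

print(handle_task("book a haircut", 0.93))
print(handle_task("negotiate a group discount", 0.41))
```

The interesting design question is where to set the threshold: too low and the bot blunders through calls it cannot handle, too high and human operators do most of the work.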
Google intends to start testing this system within its Assistant this summer. So, you can expect the Google Assistant to support Duplex functionalities in the near future.
The burgeoning smartphone market of India has many parallels with the movie industry: both a new smartphone and a new film stay in the buzz for only a limited time. However, some movies defy this norm and remain on people’s minds for a long time, and if you think about it, some mobile phones manage the same feat. One such sleeper hit is Panasonic’s Eluga Ray 700, which has been on the market since September 2017.
As per the brand, the sub-Rs 10K smartphone has been a runaway hit for the company, exceeding all expectations. Perhaps that’s why it comes as no surprise that the phone has been out of stock on all reseller platforms for quite some time. With a number of attractive features, in terms of both hardware and software, the phone has become an appealing proposition for consumers. If you’re still on the fence about purchasing a budget smartphone, read on to find out why the Eluga Ray 700 fits the bill.
One of the strongest aspects that has made the Panasonic Eluga Ray 700 a favourite among consumers is its 5,000mAh battery. With a battery this massive, the phone can easily last more than a day and a half; in fact, with moderate usage, users have reported going two days without requiring a juice-up. Furthermore, the smartphone keeps its thermals in check while charging, meaning that if you pick up the phone at any time, you won’t be caught off guard by high temperatures.
That powerful battery also makes the smartphone an ideal device for consuming multimedia on the go, aided by its 5.5-inch display. The IPS screen offers a resolution of 1,920 x 1,080 pixels, ensuring crisp visuals and sharp text. Hardly any good smartphones in this price range offer a full HD panel, so it’s not surprising that the Eluga Ray 700 ended up becoming such a hot seller.
However, not everyone uses their smartphone just to watch movies or look at photos. Sure, that’s something everyone enjoys, but there are people who use their phones for more work and less play, and the Panasonic Eluga Ray 700 takes care of this aspect too. Under the hood, the phone is powered by an octa-core MediaTek processor mated to 3GB of RAM, a combination that ensures the device can handle everyday workloads with ease. For the workaholics, this means you can easily multitask and switch between browser windows and email clients while working on your presentations and files. Gamers, meanwhile, can enjoy casual sessions of racing or shooting titles without being annoyed by lag or massive frame drops.
Last but not least, the Eluga Ray 700 comes with impressive imaging credentials too. The smartphone offers 13-megapixel cameras on both the front and back, and the primary sensor features phase detection autofocus (PDAF), improving focusing performance over the non-PDAF smartphones available in this price range.
At Rs 10,000, it becomes evident why the Panasonic Eluga Ray 700 continues to be a strong seller for the Japanese company. Its well-rounded specs make it ideal for a vast majority of users, be it as a work phone or a multimedia-centric smartphone. So, if you’re looking for a reliable budget smartphone, you should probably grab the Eluga Ray 700 before it goes out of stock, again.
Google has been pushing the Assistant hard since launching it two years ago, and last year the company even introduced the Assistant in Hindi in India. The company said that by the end of the year
— Google India (@GoogleIndia) May 8, 2018
Expectations were running high for Google's annual developer conference, I/O 2018, and just like every year, the tech giant more than delivered on the first day of the mega event. Amongst a horde of announcements, Google notably revealed a whole bunch of changes to its photo storing and managing service, Google Photos. According to the company, Google Photos now has more than 500 million active users.
Google took to a blog post to announce all these changes in a more detailed manner. One of the most prominent features of the new Google Photos is that the app is all set to become a lot more interactive. The company, in its blog post, revealed that Google Photos will soon be able to make suggestions to improve the pictures in your library.
Powered by machine learning, the app will soon be able to make suggestions such as adjusting brightness, hiding a screenshot, or sharing a picture, among others. These suggestions will only be visible on relevant photos, and users will be able to complete the actions in a single tap.
In addition, Google Photos will now also integrate Artificial Intelligence more than ever before. The app will use AI to facilitate colour editing for pictures. With a single touch, users will be able to give their pictures an artistic touch as the AI will identify the subject and keep it in colour while making the background black and white. The company also announced in its blog post that it is working on allowing users to change their old black and white pictures into colour with just a single tap. These features will be available in the Assistant tab of Google Photos. Additionally, Google also said that it will use AI to help separate subjects in photos or recognise images of documents and automatically convert them to PDF files.
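The colour-pop effect described above is conceptually simple once an AI model has segmented the subject: keep the subject's pixels, desaturate the rest. Here is a hypothetical NumPy sketch; in Google Photos the mask would come from a segmentation model, whereas here it is supplied by hand, and the simple channel-mean greyscale is an assumption.

```python
import numpy as np

def color_pop(image, mask):
    """Keep the masked subject in colour and render the background
    in greyscale. `image` is HxWx3, `mask` is HxW bool (True = subject)."""
    gray = image.mean(axis=2, keepdims=True)          # rough luminance
    result = np.where(mask[..., None], image, gray)   # blend per pixel
    return result.astype(image.dtype)

# A tiny all-red "photo" where only the top-left pixel is the subject
img = np.zeros((2, 2, 3), dtype=np.uint8)
img[..., 0] = 200
mask = np.array([[True, False], [False, False]])
out = color_pop(img, mask)
print(out[0, 0].tolist(), out[0, 1].tolist())  # [200, 0, 0] [66, 66, 66]
```

A production version would use a proper luminance formula and feather the mask edges, but the one-tap interaction maps directly onto this single masked blend.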
Though there is no definite timeline on when these features will actually make it to Google Photos, the company seems to be banking heavily on the photo managing app's popularity. With just that in mind, Google has also released new APIs that let developers plug into Google Photos services directly. What this means is that developers will now be able to take Google Photos' reach well beyond smartphones and computers. Brace yourselves for a lot more apps and devices running Google Photos, people!