

The Surface Duo has some of the most basic user interface design problems. Every time I support clients, I have to describe unlabeled mystery-meat icons to them by their shape and color. Why are there 6 icons at the bottom of the screen that don't have text labels while all of the others above them do? Why should users be able to understand the icons in the middle of the screens, but not the ones at the bottom? It would be so much easier for everyone if people could read the name of a button and instantly know what it does. Normal people aren't going to understand many of the icons at the bottom, and that row isn't even the taskbar like on Windows 10, so there's no congruity with Microsoft's other Surface devices in this design. I get that it's similar to the bottom row of icons on most other Android launchers as well as the iPhone, but the Surface Duo is more like a tablet than a phone, so the need for a row of persistent icons is lower.

First, let's look at the tablet interface of a Surface Pro running Windows 10. The Start screen takes up the full screen in tablet mode, and everything here is basically point and poke. You can read the names of the icons and poke them to open the programs. It has smooth scrolling, which is great because you can control the scrolling speed yourself and you don't have to re-orient your fingers or eyes as you would if these were full-screen pages.

The Windows 10 tablet interface also has some hidden gestures. They're completely undiscoverable, but once you figure them out, they're very useful. A swipe from the right edge shows your notifications and action center. A swipe from the left edge shows the multitasking interface, where you can choose an open app to switch to. Swiping down from the top edge essentially grabs the window's title bar, and you can then swing the gesture to the left or right to snap the app into split-screen mode. If you have two apps in split-screen mode, you can drag down on the title bar of one of them and move it to the top middle to expand it to full screen. If you swipe all the way from the top edge to the bottom edge, that closes the foreground app.

So that's what, four hidden gestures you need to learn in order to use a Surface Pro's tablet mode? If you rotate the tablet into portrait mode, all of these gestures work the same (except there's no room for split-screen mode). Learning four or five hidden gestures doesn't take up too much cognitive energy, so that's not bad at all.

Now back to the Surface Duo. Its home screen is similar to the Windows 10 full-screen Start menu in that you can launch programs from here and also customize it with widgets that show information. A big difference is that it scrolls in pages as opposed to a user-controllable smooth-scrolling list, which means more work reading each page and re-orienting your finger to poke the things you want. There's also a dock of shortcuts at the bottom, and it really isn't necessary since the home screen is already made up of customizable shortcuts. Also, why should I be able to read all of the icons on the home screen while the ones in the bottom row don't have text labels? It doesn't make sense that I can understand the functions at the top but not the ones at the bottom. I get that this is something Apple and Google do, but it still doesn't make sense, and there's plenty of room to label the functions so that people can understand them.

There are some fairly intuitive gestures for controlling the home screen. A swipe inward from the right edge scrolls a page to the right, and a swipe inward from the left edge scrolls to the left. A swipe from the top edge reveals the notifications and action center, just like Android and iOS do, but completely different from how Windows 10 does it.
