An Example of Human-Computer Interaction Using Multi-Touch Screens and Mouse-Driven Screens
Mouse-driven screens are displays controlled through pointing devices such as a mouse, joystick, or touchpad, which the user moves to interact with computers and similar devices. A touch screen, on the other hand, is an electronic display that the user controls with touch gestures, made either with the fingers or with a stylus. Touchscreens have taken the world of information appliances by storm: they are now standard in gaming consoles, tablets, and smartphones, and they have made inroads into laptops and personal computers. They are also used in ATMs and in electronic devices in the medical field. It is therefore important for all of a company's applications to run on touch-screen devices as well.
MS PowerPoint is presentation software in which one can use text, images, videos, and other media to explain an idea or present views. It can be used on mouse-driven screens as well as on multi-touch screens, and the two kinds of screens use different interaction metaphors for working with it.
The purpose of both the multi-touch screen and the mouse-driven screen is to help the user interact with the system and accomplish certain goals. The mouse-driven screen uses the 'point and click' style of interaction. When a mouse is detected, a cursor appears on screen and the mouse controls it. The user points the cursor at actions and activates them; for example, a left click selects the item under the cursor. To view different pages of a PowerPoint presentation, the user scrolls up and down by rolling the mouse wheel. To drag an icon to a different place on a slide, the user left-clicks it and drags the mouse until the icon reaches the desired position.
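As a rough illustration of this point-and-click style (not how PowerPoint itself is implemented), the TypeScript sketch below wires up click-to-select, wheel-based slide navigation, and a simple drag; the #icon element and the goToSlide helper are hypothetical stand-ins.

```typescript
// Minimal point-and-click sketch for a browser UI; the element ID and the
// goToSlide() helper are hypothetical, not part of any real PowerPoint API.
const icon = document.getElementById("icon") as HTMLElement;
let currentSlide = 0;

function goToSlide(n: number): void {
  console.log(`Showing slide ${n}`); // stand-in for real slide navigation
}

// Left click selects the item under the cursor.
icon.addEventListener("click", () => icon.classList.add("selected"));

// Rolling the mouse wheel scrolls through the slides.
document.addEventListener("wheel", (e: WheelEvent) => {
  currentSlide += e.deltaY > 0 ? 1 : -1;
  goToSlide(currentSlide);
});

// Press, move while the button is held down, release: a simple drag of the icon.
let dragging = false;
icon.addEventListener("mousedown", () => { dragging = true; });
document.addEventListener("mousemove", (e: MouseEvent) => {
  if (!dragging) return;
  icon.style.left = `${e.clientX}px`;
  icon.style.top = `${e.clientY}px`;
});
document.addEventListener("mouseup", () => { dragging = false; });
```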
On the other hand, multi-touch screens allow users to manipulate UI elements directly, giving a more real-world experience. They mainly use three interaction styles: gestures, manipulations, and stylus interaction. A gesture is a physical action performed with the fingers or a stylus to interact with the device; for example, tapping a menu item selects it.
Manipulation is an interaction style in which the UI reacts immediately to a particular movement. For example, a pinch on the screen is a manipulation that moves things closer together on the screen. The pen/stylus mode of interaction, in turn, lets the user draw and write directly on the screen, and the strokes can then be converted to software-based input ("How users interact with input devices," n.d.).
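To make the gesture and manipulation styles concrete, here is a small TypeScript sketch using standard DOM touch events; the #slide element and its scaling behavior are illustrative assumptions rather than PowerPoint's actual touch handling.

```typescript
// Tap-to-select and pinch-to-zoom with DOM touch events; #slide is a
// hypothetical stand-in for a slide shown on a multi-touch screen.
const slide = document.getElementById("slide") as HTMLElement;
let startDistance = 0;
let startScale = 1;
let currentScale = 1;

function fingerSpread(t1: Touch, t2: Touch): number {
  return Math.hypot(t1.clientX - t2.clientX, t1.clientY - t2.clientY);
}

slide.addEventListener("touchstart", (e: TouchEvent) => {
  if (e.touches.length === 2) {
    // Two fingers down: remember the starting spread and scale for the pinch.
    startDistance = fingerSpread(e.touches[0], e.touches[1]);
    startScale = currentScale;
  }
});

slide.addEventListener("touchmove", (e: TouchEvent) => {
  if (e.touches.length === 2) {
    // Manipulation: the UI reacts immediately as the fingers move together or apart.
    const ratio = fingerSpread(e.touches[0], e.touches[1]) / startDistance;
    currentScale = startScale * ratio;
    slide.style.transform = `scale(${currentScale})`;
  }
});

// A single-finger tap acts as a selection gesture, like tapping a menu item.
slide.addEventListener("touchend", (e: TouchEvent) => {
  if (e.changedTouches.length === 1 && e.touches.length === 0) {
    slide.classList.toggle("selected");
  }
});
```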
There are various conceptual models in interaction design, since it is important that the user can access, manipulate, and navigate through the software application. A conceptual model describes how the user will interact with the system, what the system will do, and what the output will be. The multi-touch screen design is based on the 'Manipulating and Navigating' conceptual model, which draws on users' knowledge of how they act in the real world: virtual software objects can be acted upon in a similar manner. For example, just as a person draws a line with a pen in real life, the stylus can draw a line on the presentation shown on the screen. The user can also move an object by touching it and sliding it to the desired location, analogous to picking up a physical object and putting it in the right place. The same model applies to a mouse-driven screen, where the user selects the 'Line' tool from the 'Drawing' toolbar and draws the line from one point to another by dragging the mouse.
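A brief sketch of this direct-manipulation idea, assuming a hypothetical #canvas element: with the browser's Pointer Events, the same code handles a stylus stroke and a mouse drag, so drawing a line works under both input styles.

```typescript
// Drawing a straight line by direct manipulation; pointer events cover both
// the stylus case (pointerType === "pen") and the mouse-drag case.
const canvas = document.getElementById("canvas") as HTMLCanvasElement;
const ctx = canvas.getContext("2d")!;
let start: { x: number; y: number } | null = null;

canvas.addEventListener("pointerdown", (e: PointerEvent) => {
  // Pen touches the screen, or the mouse button goes down: record the start point.
  start = { x: e.offsetX, y: e.offsetY };
});

canvas.addEventListener("pointerup", (e: PointerEvent) => {
  if (start === null) return;
  // Lift the pen or release the button: draw the line from start to end.
  ctx.beginPath();
  ctx.moveTo(start.x, start.y);
  ctx.lineTo(e.offsetX, e.offsetY);
  ctx.stroke();
  start = null;
});
```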
The 'Exploring and Browsing' model can be associated with mouse-driven screens. When a person searches for information in a book, he or she scans and flips through the pages; similarly, in a presentation, the user uses the mouse to scroll from one slide to the next.
Both screens are also based on the 'Instruction' conceptual model, in which the user gives a command by selecting a task, and that selection serves as the input to the system. The system processes the input and produces the appropriate output. For example, the user can select the 'Print' task either with the mouse or via the touch screen, and the system responds accordingly (Wiley, n.d.).
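A minimal sketch of the 'Instruction' model: the selected task is the input, a dispatcher is the processing step, and the logged message stands in for the output. The command names, the #print button, and the stub bodies are assumptions for illustration, not PowerPoint's API.

```typescript
// The "Instruction" model as a tiny command dispatcher: the selected task is
// the input, execute() is the processing step, and the log line is the output.
type Command = "Print" | "Save" | "NewSlide"; // illustrative command set

function execute(command: Command): void {
  switch (command) {
    case "Print":
      console.log("Sending the presentation to the printer...");
      break;
    case "Save":
      console.log("Saving the presentation...");
      break;
    case "NewSlide":
      console.log("Adding a new slide...");
      break;
  }
}

// Whether the Print button is clicked with the mouse or tapped on a touch
// screen, the browser delivers a "click", so both paths issue the same instruction.
const printButton = document.getElementById("print") as HTMLElement;
printButton.addEventListener("click", () => execute("Print"));
```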
Mouse-driven screens rely on graphical user interfaces (GUIs) to issue commands to the system, while multi-touch screens use both GUIs and gestures. Gestures have to be recalled from memory, which is harder than recognizing a command in a GUI. Moreover, gestures are not standardized across software and hardware vendors, which hurts the user experience. It is therefore important to choose the right conceptual model and interaction design for both multi-touch and mouse-driven screens to give users a good UI experience.
Business Productivity. (2013, December). How to present using touch in PowerPoint 2013. Retrieved from
Dev Center - Windows. (n.d.). How users interact with input devices. Retrieved from
Lu, H., & Li, Y. (2012). Gesture Coder: A tool for programming multi-touch gestures by demonstration. Retrieved from http://www.yangl.org/pdf/gcoder.pdf
Wiley. (n.d.). Understanding and conceptualizing interaction. Retrieved from http://www.wiley.com/college/preece/0471492787/sample_chapters/ch02.pdf