Happy Wednesday and welcome to our 11th blog post! At this point we have passed the inspiration and ideation phases and are starting a new phase - implementation! This time we are talking about modalities and user tasks, but also a little bit about technology. Behind the scenes of the blog a lot is happening - we are planning the specs of the final video and finalizing the prototype. Our project is progressing, and so is our work on the course content!
Interaction modalities
According to Bellik and Teil (1992), a modality is "a concrete form of a particular communication mode". The mode mentioned in the definition refers on the one hand to the five human senses - sight, touch, hearing, smell and taste - through which information is received, and on the other hand to the many ways of human expression, such as speech and gesture, through which information is produced. For example, the modalities of the sound mode are noise, music, speech and silence. A modality can be considered active if the user uses it consciously and passive if it is used unconsciously. (Vidakis 2017)
Our prototype relies mostly on the vision modality, but audition and tactition would also be possible computer-to-human modalities. In vision as a sensing modality, the eyes act as the human sensors, the brain acts as the decision maker, and the hand as an actuator performs the needed move (Sharma & Pavlovic 1998). From the human-to-computer modalities we use simple ones such as mouse, keyboard and touchscreen (if the screen is touch-enabled; a later application option). Later, tactition could be added as haptic feedback through the vibration component of a mobile phone, if Solita sees a need for a mobile phone application. The audition modality would also be possible, but because many employees need to work silently and rely on their sense of sight anyway, we drop the audition option and focus on the vision modality.
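To give a rough idea of what that optional tactition channel could look like, here is a minimal sketch using the web Vibration API (navigator.vibrate), assuming the possible mobile version of the platform would run in a browser that supports it. The function name confirmProjectSaved is just a hypothetical example, not part of any existing Solita system.

```typescript
// Hypothetical helper: give short haptic feedback when a project card is saved.
// Uses the standard web Vibration API; not all browsers/devices support it,
// so the call is guarded and the app works fine without it.
function confirmProjectSaved(): void {
  if ("vibrate" in navigator) {
    // One short 50 ms pulse as an unobtrusive confirmation.
    navigator.vibrate(50);
  }
}
```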
In general, multiple modalities tend to give users more affordances and can act as a backup if other communication channels aren't available or in use - that's why it's usually better to support more than one modality. In our case, though, it's important to keep the platform easy to use and as simple as possible, so that it focuses on the main task: helping employees find information about projects. Interaction modalities need to be chosen with the main platform's possibilities and current interface capabilities in mind, which is why we have decided to use only the vision modality. This makes it possible to use our prototype idea as part of the existing HR systems and bring it into use, instead of building a separate system with features that are incompatible with them.
Technology
Our idea builds on a platform that is already in use at Solita, to which new functions would be added through the software framework. It can also be considered a stand-alone application if it's implemented separately from the HR system. We would like to see our application developed as part of the HR project and staffing system, because that way it would have more potential to be used and to serve the need for project information. Just like other services and platforms at Solita, this platform would run in the cloud and use cloud computing, so that the information is always reachable.
Besides the cloud application service, Solita's employees need input devices that send digitised data to the computer. Input devices, such as the mouse and keyboard, are connected to a computer and send information to it (Computerhope.com 2020). Of these devices, the mouse is optional but widely used because of how efficient it makes working. The keyboard, in turn, is practically mandatory unless the employee is working on a touchscreen computer or mobile phone.
Main user tasks
The main idea behind our solution is to make seeking and searching information as convenient and easy as possible, so that employees can concentrate on their expertise and billable client work. There are two main user tasks in our prototype - producing content and searching for information. The first task is producing content. This happens when a project has started and has been created in the HR resource system. One person involved in the project creates the project card and provides the project name, contact person, roles of the employees, a description of the project and hashtags. After the project card has been created with this information, it goes through an approval procedure and HR approves it. After that the project appears in the project lists and can be searched for.
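To make the content-producing task a bit more concrete, below is a minimal sketch of what the project card data could look like, assuming a TypeScript implementation. The names (ProjectCard, ApprovalStatus, submitForApproval and the field names) are our own illustrative choices, not part of any existing Solita system.

```typescript
// Hypothetical shape of a project card as described above.
type ApprovalStatus = "draft" | "pending_hr_approval" | "approved";

interface ProjectCard {
  name: string;                  // project name
  contactPerson: string;         // who to ask for more information
  roles: Record<string, string>; // employee name -> role in the project
  description: string;           // short description of the project
  hashtags: string[];            // e.g. ["node js", "circular economy"]
  startDate: Date;
  endDate?: Date;                // missing while the project is ongoing
  status: ApprovalStatus;        // card shows up in searches only when approved
}

// A new card starts as a draft and is then sent to HR for approval.
function submitForApproval(card: ProjectCard): ProjectCard {
  return { ...card, status: "pending_hr_approval" };
}
```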
The other task is searching for content. If an employee needs some information for her/his project work, our assumption is that the Solita employee would open the platform where she/he finds the HR-related system, including our project information prototype. There the employee would see the main view and search for the needed information among the registered projects (those already put into the system). The employee could search with keywords such as an employee name (e.g. "Silja Sillanpää"), a hashtag (e.g. "node js") or a subject (e.g. "circular economy"). The employee could also filter the search results, for example by date or by ongoing/past projects. After that the employee could look at more information about the projects: their contact persons, employee roles, technologies and descriptions. Finally, the employee could either get the information directly from the card or contact the right person mentioned on the project card, depending on what is needed.
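As with the first task, here is a small, assumption-laden sketch of the search task, reusing the hypothetical ProjectCard type from above. A real implementation inside Solita's HR system would of course use its own search infrastructure; this only illustrates the keyword and ongoing/past filtering described in the paragraph.

```typescript
// Hypothetical keyword search over approved project cards.
// Matches the keyword against project name, employee names, hashtags and description.
function searchProjects(
  cards: ProjectCard[],
  keyword: string,
  onlyOngoing = false
): ProjectCard[] {
  const needle = keyword.toLowerCase();
  return cards
    .filter((card) => card.status === "approved")
    .filter((card) => !onlyOngoing || card.endDate === undefined)
    .filter((card) =>
      [card.name, card.description, ...card.hashtags, ...Object.keys(card.roles)]
        .some((field) => field.toLowerCase().includes(needle))
    );
}

// Example: find ongoing projects related to "circular economy".
// const hits = searchProjects(allCards, "circular economy", true);
```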
References:
Bellik, Y. & Teil, D. (1992). Définitions terminologiques pour la communication multimodale. Proceedings of Interface Homme-Machine (IHM). Available at: https://perso.limsi.fr/bellik/publications/1992_IHM_1.pdf (read 11.4.2021)
Computerhope.com. (2020). What is the difference between an input and output device? Computer Hope. Available at: https://www.computerhope.com/issues/ch001355.htm (read 12.4.2021)
Sharma, R. & Pavlovic, V. (1998). Toward Multimodal Human-Computer Interface. Proceedings of the IEEE. DOI: 10.1109/5.664275
Vidakis, N. (2017). A Multimodal Interaction Framework for Blended Learning. EAI Endorsed Transactions on Creative Technologies. Available at: https://www.researchgate.net/publication/319474523 (read 11.4.2021)