A Visualisation and Simulation Framework for Local and Remote HRI Experimentation

André Gradil and João Filipe Ferreira
Institute of Systems and Robotics (ISR), Dept. of Electrical & Computer Eng., University of Coimbra
Pinhal de Marrocos, Polo II, 3030-290 Coimbra, Portugal

Abstract—In this text, we present work on the design and development of a ROS-based (Robot Operating System¹) remote 3D visualisation, control and simulation framework. This architecture has the purpose of extending the usability of a system devised in previous work by this research team during the CASIR (Coordinated Attention for Social Interaction with Robots) project. The proposed solution was implemented using ROS, and designed to attend to the needs of two user groups – local and remote users and developers. The framework consists of: (1) a fully functional simulator integrated with the ROS environment, including a faithful representation of a robotic platform, a human model with animation capabilities and enough features for enacting human-robot interaction scenarios, and a virtual experimental setup with similar features to the real laboratory workspace; (2) a fully functional and intuitive user interface for monitoring and development; (3) a remote robotic laboratory that can connect remote users to the framework via a web browser. The proposed solution was thoroughly and systematically tested under operational conditions, so as to assess its qualities in terms of features, ease-of-use and performance. Finally, conclusions concerning the success and potential of this research and development effort are drawn, and the foundations for future work are proposed.

Index Terms—Visualisation, Simulation, Remote, User Interface, ROS, Gazebo, Framework.

Fig. 1: Desired features for most contemporary robotic development frameworks.

I. INTRODUCTION

Robots are often too big to transport, too expensive to replicate, or they may simply not be available to a researcher or developer at a convenient moment in time. Fortunately, with the increase of computational power, now more than ever, simulation and remote access save time and resources (both physical and budget-related), increasing the productivity of a research team and allowing the community to seamlessly work on the same framework.

There are several advantages to robotic simulation, the most important of which is the capability to test new algorithms and routines, reproduce and repeat experiments, generate data under different conditions, neuro-evolve robots and benchmark any of the robot's characteristics, without the risk of damaging the real robot [1]. In fact, the possibility of repeating complex experiments without external variables that may influence their outcome is a definite advantage, especially in human-robot interaction (HRI) applications, which depend critically on human subject availability and for which exact repetition is impossible precisely due to this human factor. Additionally, there is often a need to open the project to the broader research community, or simply to give the development team access from anywhere outside the laboratory. To meet this demand, a recent trend has been the development of remote robotic laboratories [2]. On the other hand, the increasing complexity of robotic systems, namely resulting from the number of modules and functionalities they comprise, can overwhelm a developer or user trying to monitor their operation, and therefore having all of the data organised in a neat and clear fashion is also paramount.

The combined set of desired features resulting from this demand and its relationship with potential user types is depicted in Fig. 1. The overall objective of the work presented in this text was to endow the robotic system developed during the FCT-funded project CASIR, devoted to studying the effect of artificial multisensory attention in human-robot interaction², with these features, as a follow-up on the future work planned in [3] – see Fig. 2. This system is supported by the IMPEP infrastructure (acronym for Integrated Multimodal Perception Experimental Platform) – see Fig. 3³. More specifically, the work presented in this text had the following main goals: (1) the development of IMPEP hardware and simulator access for local users, with the support of an intuitive local GUI; (2) providing access to remote users through a remote robotic lab.

¹ In spite of its name, ROS is not an actual operating system in the traditional sense of process management and scheduling.
² FCT Contract PTDC/EEI-AUT/3010/2012, which ran from 15-04-2013 until 31-07-2015. The motivations for this work can be found in [4], while conceptual and implementation details are reported in [5].
³ For more information about this platform please refer to [3], [6].

Fig. 2: CASIR-IMPEP system architecture overview [3] – only the bottom part of this diagram was originally fully implemented during the CASIR project, while the top part was developed as an expansion in the scope of the work presented in this text.

Fig. 3: The Integrated Multimodal Perception Experimental Platform [3], including actuators and respective degrees of freedom, and mounted sensors.

II. RELATED WORK

As the effort of applying a systematic approach to meeting the demand of implementing features such as those presented in Fig. 1 is a recent trend, only a handful of related works exists – these are described in the following text.

The Care-O-Bot Research project [7] has a similar architecture to the CASIR framework; however, it deals with a different application scope via a mobile manipulation platform. The iCub simulator was created to complement the iCub project. It is a very specific simulator with a unique architecture; it uses YARP (Yet Another Robot Platform [8]) instead of ROS, and a network wrapper for remote access. Another project, "The Construct Sim" [9], consists of a cloud-based tool for remote robotic simulation. It has a very limited free user experience, both in simulation time and in computational resources, so in order to properly simulate a scenario one has to resort to the paid services.

The PR2 and Care-O-Bot were found to possess all of the desired features displayed in Fig. 1, while the iCub lacks a remote lab and Construct Sim has no GUI nor hardware access. In terms of availability, while the PR2 and iCub projects have their features freely accessible, hardware can only be accessed via purchase, which in both cases is rather expensive. Construct Sim has several payment options, but does not make hardware available. Finally, for Care-O-Bot the price of every module is provided by the company on request.

The contributions of this work, represented in Fig. 4 and resulting from the implementation of an integrated framework boasting the features presented in Fig. 1, consist of providing the full feature set with the widest availability possible. This will allow the research team to access and develop the attention middleware both locally and remotely, and also to make a demonstrator of the CASIR framework available to the wider scientific community. Unlike related work, the framework described in this paper provides all the features of Fig. 1 freely and, in the case of remote lab access by a user external to the local research team, with reservation of time slots, at all times ensuring system and hardware security.

Fig. 4: Conceptual diagram for the IMPEP ROS framework for remote 3D visualisation, control and simulation. The modules in orange refer to the contributions of the work presented herewith, namely the simulator (impep_simulation), the hardware access – not only the IMPEP itself but also its connection through the common driver API –, the GUI, consisting of rqt-based software, and finally the remote lab supported by the CASIR-IMPEP web service.
III. IMPLEMENTATION

A. ROS framework for the CASIR-IMPEP platform

ROS is a flexible framework for writing modular robot software, capable of creating complex and robust behaviour in different types of robotic platforms. The ROS framework involves several core concepts, such as packages, nodes, topics, services and messages – please see [10] and [11] for more information. ROS is both modular and language-independent – in other words, users can create nodes in C++, Python, Octave and Lisp without losing the possibility of communication between them, as long as the messaging interface is maintained.
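To make the node/topic/message vocabulary concrete, below is a minimal sketch of a ROS node written with rospy. It follows the standard ROS tutorial pattern rather than any actual CASIR-IMPEP package, so the node name, topic and message type are purely illustrative; a C++ node subscribing to the same topic would interoperate with it transparently, which is the language independence referred to above.

```python
#!/usr/bin/env python
# Minimal sketch of a ROS node (rospy); node, topic and message type are
# illustrative only and do not correspond to any CASIR-IMPEP package.
import rospy
from std_msgs.msg import String

def talker():
    pub = rospy.Publisher('chatter', String, queue_size=10)
    rospy.init_node('talker', anonymous=True)
    rate = rospy.Rate(10)  # publish at 10 Hz
    while not rospy.is_shutdown():
        pub.publish(String(data='hello from a ROS node'))
        rate.sleep()

if __name__ == '__main__':
    try:
        talker()
    except rospy.ROSInterruptException:
        pass
```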
Virtual simulation is one of the most widely accepted recent technologies in robot development. There are numerous software tools used for simulation, with great diversity in features (supporting a variety of robotic middleware, available sensors and actuators, and compatibility with several types of robots) and also in infrastructure (code quality, technical and community support). According to [12], there are currently about 40 simulation tools used by the scientific community. However, this work follows the CASIR project, which is supported by ROS, thereby narrowing the universe of development frameworks of interest to Gazebo [13], MORSE [14], V-Rep [15] and Webots [16]. Comparing these frameworks in terms of features, Gazebo and Webots stand out among the group; however, Gazebo is more interesting in terms of support infrastructure. Moreover, only Gazebo provides the percentage of coverage from function and branch testing (52.9% and 44.5%, respectively), as seen on the Gazebo website [13]; this means that 52.9% of the functions (or subroutines) in the program were called in tests, and 44.5% of branches were executed.

To build the models, several 3D modelling tools were compared, namely Maya, 3ds Max and Blender. These solutions are very similar in features; however, due to the simplicity of the modelling demands of the work reported in this paper, and without the use of complex animations, Blender was deemed to be the most suitable solution.

Applying HMI to robotics is as important as the system itself – it is critical that the user possesses and is familiar with the right tools to work with the system. In order to organise all of this information and give the desired control to the user, the graphical user interface must be designed to be simple and intuitive. In recent ROS distributions there is a tool named rqt, which is essentially a framework for plugin development. In rqt, a developer can build his/her own perspective from plugins of all the existing GUI tools in ROS, namely image viewer, terminal, 2D plot, node and package graphs, pose viewer and even Rviz itself [10]. If the available plugins are not suitable for the needs of a project, the developer can either edit an existing plugin or even create his/her own plugin (either in C++ or Python).

Remote experimental labs allow robot middleware infrastructures to be shared remotely, in a modular way, with the broader scientific community, making it easier to compare and contribute to the research of others. Many robotic researchers have resorted to web technologies for remote robot experimentation, data collection and HRI studies; there are examples of remote access and control of robots from as early as 1995, as in the case of [17]. The arrival of new web technologies such as HTML5 makes it possible for developers to create appealing and sophisticated interfaces. With the use of protocols such as rosbridge, the communication between a web browser and ROS can be made through data messages contained in JSON [18]. Besides the ROS information exchanged via rosbridge, images also need to be transmitted and displayed – to this end, the ROS package named web_video_server was used. Within this package there are two streaming options for developers to use. The first option is based on the deprecated package mjpeg_server, and consists in converting the video stream from the desired ROS topic into an MJPEG stream (a sequence of JPEG images); this stream can then be embedded into any HTML <img> tag. The second option consists in encoding the video with the VP8 codec [19].
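As an illustration of the JSON-based communication mentioned above, the following sketch subscribes to a topic through rosbridge from a plain Python client. It assumes a rosbridge_server websocket on its default port (9090); the host name and topic are illustrative placeholders, and the third-party websocket-client package stands in for a browser here – the actual remote lab front-end uses JavaScript modules, as described in section III-D.

```python
# Sketch of the rosbridge JSON protocol from a Python client (assumed
# rosbridge_server on its default port 9090; host and topic illustrative).
import json
import websocket  # pip install websocket-client

ws = websocket.create_connection('ws://mainserver.example.org:9090')

# Ask rosbridge to forward a topic; messages arrive as JSON documents.
ws.send(json.dumps({
    'op': 'subscribe',
    'topic': '/joint_states',
    'type': 'sensor_msgs/JointState',
    'throttle_rate': 500,   # ms between forwarded messages, to spare bandwidth
}))

for _ in range(5):
    incoming = json.loads(ws.recv())
    print(incoming['op'], incoming.get('topic'))

ws.close()
```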
The expected outcome of this work was a unified ROS-supported framework designed to attain the objectives laid down in section I, allowing the CASIR attention middleware described in [5] to be used within the context defined by those objectives and by the use of the IMPEP platform. Additionally, it is a desired property that this framework be easily adaptable to any robotic head with some or all of the same characteristics as IMPEP, so that it can be used with any robotic platform with innate multisensory attention capabilities.

In this system, either the simulated or the real version can be running at any one time, both of them publishing sensor information to the same ROS topics (a concept represented by the Common Driver API module in Fig. 4). The published topics can be subscribed to by the attention middleware nodes or viewed directly by the remote and local users through the respective GUIs. Commands, on the other hand, follow almost the inverse path, the only difference being the non-existence of a direct connection between the GUIs themselves and the physical, as well as virtual, actuators. Manual control of both versions of the robot can be performed through a node in the attention middleware using terminal commands, which can be sent from within the local GUI (see section III-D).

B. Implementation details for the Gazebo-based simulation package

Three packages were developed in order to build a complete robot model that is fully compatible with ROS: impep_gazebo, impep_controller and impep_description – see Fig. 5. The main package, impep_gazebo, includes the world file, the avatar scripts and the ROS launch file. The impep_description package is responsible for the robot model itself and contains the 3D meshes of each individual part (modelled using Blender), which constitute the links of our robot. Using the meshes we can build the URDF (Unified Robot Description Format) model, an XML format describing the links and joints of the robot, defining the geometry, position and collision mesh of each 3D component, and consequently resulting in models such as those represented in Fig. 6. Finally, impep_controller includes the actuator models, parameters and publishers.

Fig. 5: IMPEP model packages for simulation.

Fig. 6: IMPEP virtual model evolution. Model (1) was the pre-existing, preliminary IMPEP model. Model (2) is the upgraded physical model, completely to scale in terms of mass and dimensions. Finally, (3) represents the final model, with the collision mesh and joint referentials.

1) IMPEP simulation – sensors: The IMPEP has three visual sensors: two RGB cameras and a Microsoft Kinect sensor. The RGB stereovision set-up, mounted so as to allow pan, tilt and version using IMPEP's actuators, consists of a pair of Guppy F-036 cameras [20]. These were modelled as faithfully as possible in the URDF IMPEP model, including their physical characteristics (e.g. mass and body dimensions, the latter also needing to match the corresponding Blender model characteristics) and technical specifications (e.g. frame rate, resolution and bit depth).

In order to create a virtual camera with these specifications, a Gazebo sensor of type "camera" was added, and a Gazebo-ROS plugin named libgazebo_ros_camera.so was attached to both the right and left camera lens models. This plugin is responsible for publishing camera data to a rostopic specified in its parameter definition. Additionally, the effect of Gaussian noise was modelled in order to simulate the residual imperfections intrinsic to every real camera.

As for the depth camera, the Microsoft Kinect V1 RGB-D sensor, a Microsoft Kinect 3D model is natively available in Gazebo that follows the body dimensions of a real Kinect; however, the remainder of the parameters had to be inserted into the model by hand. For the simulated depth camera to communicate with ROS, the libgazebo_ros_openni_kinect.so plugin was used, allowing us to define the camera namespace and topics.

In order to implement a virtual version of this feature, we were forced to restrict the range of motion of certain joints; as this also relates to the virtual actuators, the specifics of this implementation are explained in the next section.
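A practical consequence of the common driver API is that the same monitoring code works whether the images come from the Gazebo plugins or from the real drivers. The sketch below subscribes to the stereo and depth image topics and reports their publishing rates; only /right/camera/image_rect_color is quoted later in the text (section IV-2), so the remaining topic names are assumptions used purely for illustration.

```python
#!/usr/bin/env python
# Sketch of a topic-rate check usable against either the real or the
# simulated IMPEP (topic names partly assumed; see lead-in).
import rospy
from sensor_msgs.msg import Image

TOPICS = ['/right/camera/image_rect_color',
          '/left/camera/image_rect_color',
          '/camera/depth/image_raw']

counts = {t: 0 for t in TOPICS}

def make_cb(topic):
    def cb(_msg):
        counts[topic] += 1
    return cb

rospy.init_node('impep_topic_monitor')
for t in TOPICS:
    rospy.Subscriber(t, Image, make_cb(t))

start = rospy.Time.now()
rospy.sleep(10.0)                       # sample for ten seconds
elapsed = (rospy.Time.now() - start).to_sec()
for t in TOPICS:
    rospy.loginfo('%s: %.1f Hz', t, counts[t] / elapsed)
```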
2) IMPEP simulation – actuators: The IMPEP includes different types of DC motors – two PMA-11A-100-01-E500ML motors (one for pan, one for tilt) and two PMA-5A-80-01-E512ML motors (one for each camera axis), all from Harmonic Drive (further information about the motors can be found in [21]). The differentiation between fixed and revolute joints results from the low-level foundation implementing the virtual actuators according to the technical specifications of each motor. The implementation of end-of-movement sensors consists in creating an upper and a lower movement limit in the revolute joints, thereby emulating the function of the kinaesthetic sensors of the real IMPEP. With these restrictions in place, the virtual IMPEP has the same range of motion as the real one in every moving joint. In addition to movement limits, effort and velocity limits were also implemented, not only to emulate the safety mechanisms of the real IMPEP, but also to further approximate the behaviour of both versions of the robot.

With all the limits and joint parameters defined, the impep_controller ROS package was developed using the libgazebo_ros_control.so plugin in order to allow communication between Gazebo and ROS, similarly to the camera plugins. This package is responsible for numerous important tasks, namely implementing PID parameters, publishing joint states, and converting them to TF transforms for Rviz and other ROS tools.
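In the actual framework, manual commands are routed through a node in the attention middleware (section III-A); the sketch below only illustrates the kind of low-level command such a node would ultimately issue to the simulated joints, assuming impep_controller exposes standard ros_control position controllers. The controller and topic names are illustrative, not the actual ones used in the package.

```python
#!/usr/bin/env python
# Sketch of manual joint control for the simulated head (assumed
# ros_control position controllers; controller/topic names illustrative).
import rospy
from std_msgs.msg import Float64

rospy.init_node('impep_manual_command')
pan = rospy.Publisher('/impep/pan_position_controller/command',
                      Float64, queue_size=1)
tilt = rospy.Publisher('/impep/tilt_position_controller/command',
                       Float64, queue_size=1)

rospy.sleep(1.0)             # give the publishers time to connect
pan.publish(Float64(0.3))    # radians; clipped by the URDF joint limits
tilt.publish(Float64(-0.1))
```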
3) Environmental simulation: Fig. 7 shows a direct comparison between the work areas of the simulated and the real IMPEP. Some key variables, such as distance to the table, table-top and experimental object colour, were approximated as closely as possible in the simulated environment. The rest of the simulated laboratory was populated with roughly the same kind of static objects (e.g. tables and bookshelves); some additional objects in the room were purposely modelled as red, so as to add perceptually salient entities, which can be used as potential distractors in attention studies [22].

Fig. 7: Comparison between (a) simulated and (b) real set-up for HRI.

4) Avatar and interaction simulation: As a preliminary animated scene, a simple walking skeleton controlling a male 3D model moving in a circular trajectory was implemented, thus simulating a male subject walking in front of the IMPEP set-up. This was implemented in the human model XML file itself and then included in the room_only.world file, thus building the complete world in which the IMPEP is inserted. More animations will be created in future work, taking this preliminary animation as a template and using more complex coding and advanced technologies.

C. Implementation details for the rqt-based user interface

In most ROS frameworks, spatial visualisation is implemented using Rviz. However, in spite of being a very complete tool, using it standalone is not as simple or interactive as required for our system. In order to capitalise on the advantages of Rviz while adding increased flexibility in GUI design, rqt_rviz was used [10]. This plugin embeds Rviz into an rqt interface while keeping all of its features and functionalities; however, unlike the rqt 2D visualisation plugin, it still depends on its ROS counterpart.

Given the abundance of visual representations required to monitor camera feeds or processing results from the attention middleware (e.g. point clouds, 3D reconstructions, audio signal waveforms, etc.), the developed GUI must be able to display the greatest variety of information possible, while maintaining an uncluttered dashboard, so as to present a maximum level of detail for each data visualisation, all of this allowing the greatest degree of on-the-fly reconfigurability possible. For development and debugging purposes, the convenience of not having to switch windows on the desktop to access text terminals should also be addressed. Therefore, the GUI dashboard was configured so as to allow the display of text terminals in embedded frames in the interface.

A GUI layout implementing these features is presented in Fig. 8. The plugin used for 2D visualisation is called rqt_image_view [10] – an rqt version of ROS's image_view [10], in which the system uses image_transport to provide classes and nodes capable of transmitting images in arbitrary over-the-wire representations without dependencies between them. With this plugin, the developer can abstract away the complexity of communication, seeing only sensor_msgs/Image type messages. Alas, image_view is not very user friendly, since the desired topic must be specified when running the tool in a terminal. Fortunately, the rqt version sidesteps this issue by adding a dropdown menu listing all of the sensor_msgs/Image topics available. Two additional interesting features of this plugin are the save-image and topic-refresh buttons (relevant in case new publisher nodes are launched). A third feature of the GUI is the ability to embed a terminal in an interface frame, via the Python GUI plugin rqt_shell, which supports a fully functional embedded XTerm [10]. An improvement to the terminal plugin was made, allowing it to display two windows in the same space (with the use of tabs). Finally, we implemented a user-friendly package launcher using an experimental plugin named rqt_launch, allowing the user, among other things, to run and stop selected launch files (and individual nodes from the active launch file) chosen via a dropdown menu.

Fig. 8: Mockup of the rqt-based interface. Each frame in the interface was preconfigured to display information as follows: (1) is the main camera visualisation frame; (2) is the main processed data (e.g. 3D reconstructions, signal waveforms, etc.) visualisation frame; (3) contains two secondary frames for camera (or related data, such as depth maps, etc.) visualisation; (4) contains frames for two support terminals in tabs; (5) contains the rqt launcher with the possibility of node selection. Note that all of these frames are reconfigurable on the fly.

Using the .perspective configuration file, the user can run the rqt interface on any computer with a ROS distribution equal to or above Indigo and have it fully functional. We were, therefore, able to meet the important requirement of separating the computational workload of the attentional middleware processing from that of visualisation, as depicted in Fig. 2. A running instantiation of the GUI is presented in Fig. 9.

Fig. 9: Instantiation of the rqt-based interface applied to the real system in operational conditions.
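Whether editing an existing plugin, as was done for the terminal plugin above, or writing a new one, an rqt plugin in Python reduces to a small class registered with the rqt framework. The following skeleton is a minimal sketch of that generic pattern; the class and label are illustrative, and the plugin.xml and package manifest declarations that rqt needs in order to discover the plugin are omitted for brevity.

```python
# Minimal sketch of a custom rqt plugin in Python (names illustrative;
# plugin.xml / package manifest declarations omitted).
from rqt_gui_py.plugin import Plugin
from python_qt_binding.QtGui import QLabel  # QtWidgets on Qt5-based distributions


class ImpepStatusPlugin(Plugin):
    """Shows a single label inside an rqt perspective frame."""

    def __init__(self, context):
        super(ImpepStatusPlugin, self).__init__(context)
        self.setObjectName('ImpepStatusPlugin')
        self._widget = QLabel('IMPEP middleware: waiting for topics...')
        self._widget.setWindowTitle('IMPEP status')
        context.add_widget(self._widget)  # docked into the rqt dashboard

    def shutdown_plugin(self):
        pass  # unsubscribe from topics / stop timers here
```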
D. Implementation details for the web service supporting the CASIR-IMPEP remote lab

The web service supporting the CASIR-IMPEP remote lab uses a client-server architecture implemented with Rosbridge. Additionally, since streams of image topics are to be displayed in the HTML interface, thereby requiring a sustained connection with appropriate bandwidth and upload/download speeds, the Web Video Server tool was also used [10].

The first implementation step was to set up the server side. As the laboratory has a firewalled LAN, a "tunnel" had to be created in order to grant outside access to the main project computer (which acts as our server). After the connection was configured, it was necessary to create and configure the video stream using Web Video Server. This tool opens a local port and waits for incoming HTTP requests; when a video stream of a ROS image topic is requested via HTTP, it subscribes to the corresponding topic and creates an instance of the video encoder. The user interface for the client side of the web service, on the other hand, was created using a simple HTML file – see Fig. 10. In a lower layer, however, it has JavaScript modules that communicate with Rosbridge through websockets. To create these scripts, we used the work by Blaha et al., 2013 [23] as a template, updating it with contemporary plugins and custom parameters so as to conform with our requirements.

Fig. 10: CASIR-IMPEP remote lab – the HTML web page already connected to the server and streaming one topic.

An overview of the complete system, from server to client, passing through the communication protocols, can be seen in Fig. 11, including possibilities for expansion. The remote lab is the module for which the fewest features were implemented and with the most room for improvement: currently, only the core web service and a front-end interface of the remote lab have been implemented. In summary, the interface allows the user to see topics from the main computer anywhere with just an internet connection and the HTML file itself. It also allows the user to send commands to defined topics through a joystick feature.

Fig. 11: Complete dataflow diagram of the web service. The modules in yellow represent potential future expansions.
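On the client side, each image stream in the page is simply an HTTP URL served by Web Video Server. The snippet below sketches how such a URL can be composed; it assumes Web Video Server's default port (8080) and its stream endpoint, the host name is an illustrative placeholder, and the quality and resolution values mirror the test conditions of section IV-2.

```python
# Sketch of the URL scheme assumed for Web Video Server MJPEG streams
# (default port 8080 assumed; host and parameter values illustrative).
host = 'mainserver.example.org:8080'     # reached through the LAN tunnel
topic = '/right/camera/image_rect_color'
stream_url = ('http://%s/stream?topic=%s&type=mjpeg&quality=100'
              '&width=640&height=480' % (host, topic))
print(stream_url)
# The remote lab page embeds such a URL directly, e.g. <img src="...">.
```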
IV. RESULTS AND DISCUSSION

The developed framework was tested in order to evaluate its performance in full-blown operation (videos including examples of several operational conditions of the system can be found online at https://www.youtube.com/playlist?list=PLfGLccmDrgxsfCZcNaxXS7ztNgnMlbBKd). The core computational hardware supporting the framework (Fig. 2) is composed of the main computer, which includes a 4-core Intel Core i7-3770 @ 3.40 GHz CPU, 2 Asus GTX780 3GB DirectCU II OC GPU units, an Asrock Z77 OC Formula motherboard with 16 GB DDR3-1600 MHz RAM, a 128 GB Kingston SSDNow V200 and a Seagate Barracuda 1 TB HDD, and the visualisation computer, which is expected not to be as powerful as the main unit. To validate this assumption we used a virtual machine (using VMware software) emulating a 2-core Intel Core i5-6300HQ CPU @ 2.30 GHz and a Gallium 0.4 on SVGA3D GPU, supported by 3 GB DDR3-2300 MHz RAM and a 76.0 GB hard disk. As for the remote CASIR-IMPEP lab clients, several different machines with various operating systems were used for testing purposes.

1) Simulation and visualisation proof-of-concept: CPU and RAM usage was measured using htop⁴, while GPU usage was measured using nvidia-smi⁵.

⁴ htop is an interactive process viewer for Unix systems. It is a text-mode application (for console or X terminals).
⁵ nvidia-smi is a command-line utility intended to aid in the management and monitoring of NVIDIA GPU devices.

With further configuration of the simulator launch file, we were able to launch only the backend of Gazebo – in other words, the Gazebo server without the client (desktop UI). This way, all of the packages, controllers, computations and topics were active and being published in ROS, but only in the background. Experiments running in the simulated world were monitored by the user through the IMPEP's cameras and a global simulated external camera view, giving a third-person perspective. As can be seen in Table I, the absence of the Gazebo UI reduced the CPU load by almost 12%, despite the need for an extra camera with higher resolution. Effects on memory usage were found to be negligible, while GPU usage, on the other hand, dropped slightly, as expected, since there was no need to render the entire world in real time.

TABLE I: Gazebo with UI vs. without UI benchmark comparison. Percentages and memory usage are relative to the specifications of the main computer.

Benchmark   | CPU    | RAM     | GPU
UI          | 24.18% | 1422 MB | 438 MB
Without UI  | 12.51% | 1267 MB | 276 MB

Exhaustive tests were also conducted to evaluate visualisation performance, either running the GUI directly on the main computer or passing topics to the visualisation computer, where they were displayed using the rqt interface running on a local ROS installation. These tests serve the purpose of assessing how much added value there is in running visualisation on a separate computer. Two operational scenarios were used: (1) local visualisation, where the main computer runs everything, including real sensor driver management and the visualisation of IMPEP camera topics and point cloud data; (2) external visualisation, where the main computer still runs the real sensor drivers, but all visualisation is relegated to the separate, dedicated computer described above, displaying the same topics. Table II shows the results for the two operational scenarios.

TABLE II: Local vs. remote visualisation benchmark comparison; the specifications of both machines are those in section III-D.

Main Computer Benchmark                        | CPU    | RAM     | GPU
Local Visualisation                            | 12.51% | 1444 MB | 344 MB
External Visualisation                         | 9.00%  | 1038 MB | 144 MB
Visualisation Computer Benchmark               | CPU    | RAM     | GPU
Local Visualisation (visualisation computer idle) | 2%     | 824 MB  | N/A
External Visualisation                         | 61.30% | 1217 MB | N/A

We found that in the external visualisation scenario the CPU load decreases by 3.51% for the main computer. Furthermore, a decrease of 400 MB of RAM usage and 200 MB of graphical memory on this computer is observed when compared with the local visualisation scenario. On the other hand, analysing the benchmark values for the visualisation computer, we can see that its computational load increases – CPU usage suffers by far the largest increase; however, we need to take into consideration that the percentage for the main computer refers to a 4-core, 8-thread CPU with a 3.4 GHz clock frequency, while the visualisation computer only has 2 cores and 2 threads at a lower clock frequency for the exact same computations. Conversely, memory usage offers a more direct comparison, since it increases by almost exactly the amount it decreases for the main computer. Finally, we conducted tests for different combinations of these conditions, where we measured the computational weight of both simulation and visualisation running simultaneously, comparing the effects of the lack of UI. Performance was found to be consistent with the previous results: CPU load drops significantly after removing the UI, and even further with remote visualisation. As for memory usage, the conclusions are similar.

Testing these modules while connecting them to allnodes.launch, the main launch file that runs every node of the attention middleware, however, brought forth important conclusions about the limitations of our system. We found that the major system load comes from the middleware itself. Since the CPU is already overloaded using the current setup, even when relegating visualisation to an external computer and removing the need to fully render the simulated environment in Gazebo, for each second passed in the "real world" only 0.6 seconds were processed in the simulator – a very important factor that needs to be taken into account by future developers and users.
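The 0.6 real-time factor reported above can also be measured directly from ROS: when use_sim_time is set, Gazebo publishes simulated time on /clock, so comparing its progression with wall-clock time yields the factor. A minimal sketch, assuming a running simulation; the sampling duration is illustrative.

```python
#!/usr/bin/env python
# Sketch: estimate the Gazebo real-time factor from the /clock topic
# (assumes use_sim_time is set and the simulation is running).
import time
import rospy
from rosgraph_msgs.msg import Clock

samples = []

def on_clock(msg):
    samples.append((time.time(), msg.clock.to_sec()))

rospy.init_node('real_time_factor_probe')
rospy.Subscriber('/clock', Clock, on_clock)
time.sleep(30.0)  # wall-clock sampling window

if len(samples) >= 2:
    (w0, s0), (w1, s1) = samples[0], samples[-1]
    print('real-time factor: %.2f' % ((s1 - s0) / (w1 - w0)))
```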
In this study, several influ- computer still runs real sensor drivers, but all visualisation is ential experimental conditions were fixed, such as selected relegated to the separate dedicated computer, described above, topic /right/camera/image_rect_color (real sys- tem), resolution (640x480 the camera’s default), stream quality 4 htop is an interactive process viewer for Unix systems. It is a text-mode at 100%. Additionally, in all experimental runs the cho- application (for console or X terminals). 5 nvidia-smi is a command line utility intended to aid in the management sen browser was Google Chrome (the most optimized for and monitoring of NVIDIA GPU devices. Web video server applications; in point 3-Latency of [10].) 2016 23◦ Encontro Português de Computação Gráfica e Interação (EPCGI) TABLE III: Internet connection benchmark. 1 - cabled connection inside the lab, 2 - wifi connection at the Electrical Engineering Department (same [2] L. Gomes and S. Bogosyan, “Current trends in remote laboratories,” building), 3 - wifi connection at home. IEEE Transactions on Industrial Electronics, vol. 56, no. 12, pp. 4744– 4756, 2009. Download Upload Ping Fps (avg.) Bandwidth (avg.) [3] P. Lanillos, J. N. Oliveira, and J. F. Ferreira, “Experimental Ethernet Cable 79.61 Mbs 96.12 Mbs 5 ms 25 12.2 Mbs Setup and Configuration for Joint Attention in CASIR,” Mobile Wireless 1 23.90 Mbs 23.70 Mbs 5 ms 25 12.2 Mbs Robotics Lab Institute of Systems and Robotics, Coimbra, Wireless 2 15.54 Mbs 17.47 Mbs 6 ms 25 12.2 Mbs Portugal, Tech. Rep. MRL-CASIR-2013-11-TR001, November 2013. [Online]. Available: http://mrl.isr.uc.pt/projects/casir/index.php?menu= 5&language=eng&tabela=geral Our system was found to be stable for all connections, [4] J. F. Ferreira and J. Dias, “Attentional Mechanisms for Socially Interac- tive Robots A Survey,” in IEEE Transactions on Autonomous Mental exhibiting an fps average of 25 and 12.2Mbs of bandwidth Development, vol. 6. IEEE, 2014, pp. 110 – 125. occupation. Note that the same test was repeated having [5] P. Lanillos, J. F. Ferreira, and J. Dias, “Designing an Artificial Attention both wireless clients running at the same time, without any System for Social Interaction,” in 2015 IEEE/RSJ International Confer- ence on Intelligent Robots and Systems (IROS). IEEE, 2015, pp. 4171 change of values in each case. The same test was performed – 4178. with the simulated environment, and in this case, mostly due [6] P. Lanillos and J. F. Ferreira, “The CASIR-IMPEP Attention Framework to including joystick publishing commands (manual remote for Social Interaction with Robots,” Mobile Robotics Lab Institute of command of the real IMPEP head is not allowed, for obvious Systems and Robotics, Coimbra, Portugal, Tech. Rep. MRL-CASIR- 2013-12-TR004, December 2013. safety reasons), average fps dropped to 22 and the bandwidth [7] F. Weihardt, “Care-o-bot-research: Providing robust robotics hardware occupation increased to 16Mbs. In any case, this does not to an open source community,” 2011, [presentation]. represent a significant change in performance. [8] G. Metta, P. Fitzpatrick, and L. Natale, “YARP: Yet Another Robot Platform,” International Journal on Advanced Robotics Systems, p. V. C ONCLUSION AND F UTURE W ORK 3(1):4348, 2006. [9] T. Construct, “The Construct - Just Simulate!” 2016, [Online; accessed Throughout this text, the overall objective and goals defined 29-February-2016]. [Online]. Available: http://www.theconstructsim. 
V. CONCLUSION AND FUTURE WORK

Throughout this text, the overall objective and goals defined in section I were shown to be satisfactorily attained and, as demonstrated in section IV-1, performance requirements were also satisfactorily met. Every component in the proposed framework acts in a synergistic fashion so as to complement the core nodes in a simple, modular and optimised way, striving to minimise resource usage and the computational burden of the main computer. A detailed description of the work presented herewith can be found in [24].

Concerning future work, the Gazebo simulator will be improved in terms of simulated human actors, namely by adding improved, more realistic models, for example by resorting to 3D scanning and creating the skeleton and joints accordingly. A suite of action scripts for the animations will also be developed, followed later by a user-friendly toolkit to assist with script generation. In terms of hardware inclusion, Gazebo 4 comes with Oculus Rift and Razer Hydra compatibility, which would give the user a higher level of immersion and more possibilities of human/object interaction inside the simulator. As for the rqt visualiser, the launcher section of the UI will be redesigned in order to make it even more user-friendly, especially for non-developers, as already shown in Fig. 8. Other improvements will be the development of visualisation screens for the diverse data displays needed for attention middleware development and monitoring (for example, different types of 3D reconstructions, signal waveforms, etc.). Finally, for the remote lab the most important follow-up work will be hosting the web site on a server and providing a secure PHP/SQL handshake procedure for access authorisation (i.e. a welcoming homepage, time slot requests to the webmasters, etc.).
REFERENCES

[1] V. Tikhanoff, A. Cangelosi, P. Fitzpatrick, G. Metta, L. Natale, and F. Nori, "An Open-Source Simulator for Cognitive Robotics Research: The Prototype of the iCub Humanoid Robot Simulator," 2008.
[2] L. Gomes and S. Bogosyan, "Current trends in remote laboratories," IEEE Transactions on Industrial Electronics, vol. 56, no. 12, pp. 4744–4756, 2009.
[3] P. Lanillos, J. N. Oliveira, and J. F. Ferreira, "Experimental Setup and Configuration for Joint Attention in CASIR," Mobile Robotics Lab, Institute of Systems and Robotics, Coimbra, Portugal, Tech. Rep. MRL-CASIR-2013-11-TR001, November 2013. [Online]. Available: http://mrl.isr.uc.pt/projects/casir/index.php?menu=5&language=eng&tabela=geral
[4] J. F. Ferreira and J. Dias, "Attentional Mechanisms for Socially Interactive Robots – A Survey," IEEE Transactions on Autonomous Mental Development, vol. 6, pp. 110–125, 2014.
[5] P. Lanillos, J. F. Ferreira, and J. Dias, "Designing an Artificial Attention System for Social Interaction," in 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2015, pp. 4171–4178.
[6] P. Lanillos and J. F. Ferreira, "The CASIR-IMPEP Attention Framework for Social Interaction with Robots," Mobile Robotics Lab, Institute of Systems and Robotics, Coimbra, Portugal, Tech. Rep. MRL-CASIR-2013-12-TR004, December 2013.
[7] F. Weihardt, "Care-O-bot Research: Providing robust robotics hardware to an open source community," 2011, [presentation].
[8] G. Metta, P. Fitzpatrick, and L. Natale, "YARP: Yet Another Robot Platform," International Journal on Advanced Robotics Systems, 3(1):43–48, 2006.
[9] The Construct, "The Construct – Just Simulate!" 2016, [Online; accessed 29-February-2016]. Available: http://www.theconstructsim.com/
[10] OSRF, "ROS Wiki," [Online; accessed 09-August-2016, last edit 15-September-2016 09:17:13]. Available: http://wiki.ros.org
[11] M. Quigley, K. Conley, B. Gerkey, J. Faust, T. Foote, J. Leibs, R. Wheeler, and A. Y. Ng, "ROS: an open-source Robot Operating System," in ICRA Workshop on Open Source Software, vol. 3, no. 3.2. Kobe, Japan, 2009, p. 5.
[12] S. Ivaldi, J. Peters, V. Padois, and F. Nori, "Tools for simulating humanoid robot dynamics: A survey based on user feedback," in 2014 IEEE-RAS International Conference on Humanoid Robots. IEEE, 2014, pp. 842–849.
[13] Open Source Robotics Foundation, "Gazebo," 2014, [Online; accessed 12-July-2016]. Available: http://www.gazebosim.org/
[14] LAAS-CNRS, "MORSE," [Online; accessed 12-July-2016]. Available: https://www.openrobots.org/morse/doc/stable/morse.html
[15] Coppelia Robotics, "V-REP: Virtual Robot Experimentation Platform," [Online; accessed 12-July-2016]. Available: http://www.coppeliarobotics.com/
[16] Cyberbotics, "Webots – robot simulator," [Online; accessed 12-July-2016]. Available: https://www.cyberbotics.com/
[17] A. L. Taylor and J. T. Wright, "A telerobot on the world wide web," in National Conference of the Australian Robot Association, 1995.
[18] C. Crick, G. Jay, B. Pitzer, and O. C. Jenkins, "Rosbridge: ROS for non-ROS users."
[19] J. Bankoski, P. Wilkins, and Y. Xu, "Technical overview of VP8, an open source video codec for the web."
[20] Allied Vision Technologies GmbH, "GUPPY F-036 Datasheet," [Online; accessed 24-August-2016]. Available: https://www.alliedvision.com/en/products/cameras/detail/Guppy/F-036.html
[21] Harmonic Drive AG, "Engineering Data DC Servo Actuators PMA," [Online; accessed 24-August-2016]. Available: http://harmonicdrive.de/mage/media/catalog/category/ED_PMA_E_1019821_12_2015_V01_6.pdf
[22] M. Kuniecki, J. Pilarczyk, and S. Wichary, "The color red attracts attention in an emotional context. An ERP study," Frontiers in Human Neuroscience, vol. 9, 2015.
[23] M. Blaha, M. Krec, P. Marek, T. Nouza, and T. Lejsek, "Rosbridge web interface," Department of Cybernetics, Faculty of Electrical Engineering, Czech Technical University, 166 27 Prague 6, Czech Republic, Tech. Rep., May 2013.
[24] A. Gradil, "Development of a Remote 3D Visualisation, Control and Simulation Framework for a Robotic Head," [Online; accessed 07-November-2016]. Available: http://ap.isr.uc.pt/archive/651.pdf