
Computer Science

Faculty of Engineering, LTH


Elin's open projects

 

General information

My research here in the group for Robotics and Semantic Systems covers Human-Robot Interaction and robotic (knowledge) representations suitable for Human-Robot Communication.

Previously, I worked on a generic representation of indoor environments that allows a mobile robot to build a customised, specific representation during an interactive guided tour given by a human user. The overall work, published in my thesis "Human-Robot Interaction and Mapping with a Service Robot: Human Augmented Mapping", covered both the robotic side (mapping, localisation and navigation) and user studies investigating aspects of the "guided tour" scenario.

Currently, however, my research focus is shifting towards industrial robots and the possibility of transferring results from service-robot-related Human-Robot Interaction research to this very different world.

Hence, if you are interested in any of these areas, or have an idea for a project for which I seem a suitable supervisor, do not hesitate to contact me!

If you do not have an idea of your own but a general interest in the topics I work with, you may find something in the list below! Please take these suggestions as no more than that: suggestions. A formal project specification would be written once we have found a starting point for a discussion!

One very important thing to be aware of when contacting me about these suggestions: I am interested in rather research-oriented projects here. So, if you want to poke into some odd area, and are willing to accept "no, this is not the way this should be done" or "whoops, this is an interesting new question, I never thought of that" as a result, you are very welcome to do so. If you are, on the other hand, more interested in building a system that would attract interest from industry, you are still welcome to contact me as a potential academic supervisor, but then there should be a topic available other than those listed here.

Specific project sketches:

  • High-level Programming of Industrial Robots 
    Recently, we conducted a user study with the dual-arm robot YuMi (published in "Simplified Programming of Re-usable Skills on a Safe Industrial Robot"). The study investigated how much a non-expert (in terms of robot programming experience) would gain by re-using previously created so-called skills. To support the study we had to develop a graphical user interface, and there are certainly aspects of how well this tool supported the users in their tasks; it could be interesting in itself to see HOW our users did what they did.
    From the study we have a substantial amount of video data that could be analysed further (beyond our own first analysis) with respect to different aspects (list below). Such an analysis would first require annotating the video material, and then, depending on the task at hand, using a suitable approach from machine learning, knowledge representation and/or reasoning to actually make something useful out of the data. Project idea sketches could be:

    • Finding common strategies in re-using program parts and skills. It would be interesting to see which strategies for "updating" specific instructions were applied, and what implications this has for our assumptions about the usefulness of re-usable skills. This is a very high-level analysis of the material, but it might be possible to use a prototype interaction monitor (Felip Martí Carrillo, "Monitoring and Managing Interaction Patterns in HRI", MSc thesis, LU, 2015) to find certain features in the user behaviour, which could then feed a further analysis based on Bayesian techniques.

    • Investigating patterns in instructing the robot. Similar to the previous suggestion, but more specific: here, the idea is to look even more at HOW our users instructed the robot, i.e., how they order their actions (and which actions they actually perform) to create a specific type of instruction for the robot. For example, are there patterns in the user's behaviour corresponding to the type of instruction that is to be put into the program sequence? This investigation might also be supported by the above-mentioned prototype, and would be based on Bayesian techniques to find potential patterns (a respective article is to be published shortly; contact me if you want to know more).

    • Reasoning on user intentions, preventing errors in coordinate frame references. When programming industrial robots, it is tricky to keep track of all the coordinate systems involved. Unfortunately, there are cases where it is not possible to hide all of this nitty-gritty business from the user (which would certainly be nice, but maybe not practical or effective). Hence, the system should guide the user through these cliffs. We noted in our study that there is potential for rather basic improvements in our tool and the underlying handling that might have quite an effect. NB: this project would involve not only finding suitable techniques and points in time in the interaction (from observing the study material) for intervention by the system, but also some implementation work in the system (C# and RAPID, ABB's robot control language), as well as basic reasoning about what intention a user might have. E.g., if you have programmed the robot arm to pick up work piece A1 at point P and place it on top of work piece B1 at point Q, then you probably want a following execution to find work piece A2 at point P' but still place A2 on top of B2 at Q, and not apply the same relative change of position from P to P', i.e., place it at Q', somewhere next to B2 but not on top of it. If you forget to instruct this switch of coordinate references (A1 belongs to P and A2 to P', while B1 and B2 both belong to Q) between picking and placing, your program will do "weird things", which could be prevented by asking. This project aims at building the basis for "asking the right question".


    • Your own ideas...: Industrial robots usually come with their own programming software, which is in many cases quite tedious to handle. In the group for Robotics and Semantic Systems we are looking for ways to make the programming process for industrial robots more accessible to non-expert (with respect to robot programming) operators. Approaches used include Programming by Demonstration / kinaesthetic teaching and natural-language-based programming, among others. If you are interested in a project in these (or related) areas, feel free to contact me.
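To give a feeling for the pattern-finding suggestions above: one very simple starting point, once the video material has been annotated with action labels, would be to estimate transition probabilities between consecutive user actions. This is only a minimal sketch with made-up annotation labels (the real study material would define its own), not a description of our actual analysis:

```python
from collections import defaultdict

def transition_probabilities(sessions, smoothing=1.0):
    """Estimate first-order transition probabilities P(next | current)
    from annotated action sequences, with Laplace smoothing."""
    counts = defaultdict(lambda: defaultdict(float))
    actions = set()
    for seq in sessions:
        actions.update(seq)
        for cur, nxt in zip(seq, seq[1:]):
            counts[cur][nxt] += 1
    probs = {}
    for cur in actions:
        total = sum(counts[cur].values()) + smoothing * len(actions)
        probs[cur] = {nxt: (counts[cur][nxt] + smoothing) / total
                      for nxt in actions}
    return probs

# Hypothetical annotated sessions, one action sequence per user session:
sessions = [
    ["select_skill", "adjust_parameter", "run_program"],
    ["select_skill", "run_program"],
    ["create_instruction", "adjust_parameter", "run_program"],
]
probs = transition_probabilities(sessions)
```

A more serious analysis would of course use richer Bayesian models (and the interaction-monitor prototype mentioned above), but even such frequency estimates can already hint at common instruction strategies.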

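To make the coordinate-frame pitfall from the pick-and-place example concrete, here is a minimal sketch in plain Python with hypothetical coordinates (the real system works with C# and RAPID targets): if the place position is mistakenly expressed relative to the pick frame, the shift from P to P' propagates to the place position, which ends up at Q' instead of Q.

```python
def shift_point(p, d):
    """Translate point p by displacement d (component-wise)."""
    return [a + b for a, b in zip(p, d)]

def place_target(place_point, pick_shift, referenced_to_pick_frame):
    """Where the robot will put the work piece after the pick location moved.

    If the place target is (mistakenly) expressed in the pick frame, the
    shift of the pick point propagates to the place position as well."""
    if referenced_to_pick_frame:
        return shift_point(place_point, pick_shift)  # lands at Q', next to B2
    return list(place_point)                         # stays at Q, on top of B2

# Hypothetical coordinates in metres:
P = [0.10, 0.20, 0.00]       # original pick point (A1)
Q = [0.50, 0.20, 0.05]       # place point, on top of B1 / B2
shift = [0.03, -0.01, 0.00]  # A2 was found at P' = P + shift

wrong = place_target(Q, shift, referenced_to_pick_frame=True)
right = place_target(Q, shift, referenced_to_pick_frame=False)

# A system that "asks the right question" could flag the discrepancy:
if wrong != right:
    question = "Should the place position follow the pick point, or stay at Q?"
```

The interesting research part is of course not this arithmetic, but deciding from the interaction when such a question is worth asking and how to phrase it.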
 

  • Active logic in (Interactive) Robotic Mapping
    A recent MSc project (not yet published) investigated the applicability of active logic to robot navigation. One idea for future work was to investigate more deeply how far the environment representation actually used (a graph structure) could be exploited for the respective planning. In other previous work, a graph structure for indoor environments was developed that could be applicable here; it was meant to be used in an interactive mapping process (Human Augmented Mapping). This process builds up a map according to one user's understanding, and there might be conflicts between maps built for and by different users, which could be something to handle with active-logic-based reasoning.
    This project would hence lie at the interface of these two areas, and might involve some work with a mobile robot running under ROS (ros.org). It might also require some work to transfer the original environment representation from an older system into the current working environment (i.e., into the ROS world), so you should not be afraid to handle and/or produce C++ code! This project suggestion is probably the least specific and very open to your own suggestions on what could be done.
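As a rough illustration of the kind of graph structure and user-map conflict involved, consider this minimal sketch (hypothetical place names; the real representation, and any active-logic machinery on top of it, would be far richer):

```python
class TopologicalMap:
    """Minimal graph representation of an indoor environment:
    nodes are places, edges are traversable connections."""

    def __init__(self):
        self.labels = {}    # node id -> label given by the user
        self.edges = set()  # undirected connections between node ids

    def add_place(self, node, label):
        self.labels[node] = label

    def connect(self, a, b):
        self.edges.add(frozenset((a, b)))

def label_conflicts(map_a, map_b):
    """Nodes that both users labelled, but differently: the kind of
    contradiction an active-logic-based reasoner would have to resolve."""
    shared = map_a.labels.keys() & map_b.labels.keys()
    return {n: (map_a.labels[n], map_b.labels[n])
            for n in shared
            if map_a.labels[n] != map_b.labels[n]}

# Two users giving guided tours of the same environment:
alice, bob = TopologicalMap(), TopologicalMap()
alice.add_place(1, "corridor"); alice.add_place(2, "kitchen")
bob.add_place(1, "corridor");   bob.add_place(2, "break room")
alice.connect(1, 2); bob.connect(1, 2)

conflicts = label_conflicts(alice, bob)  # {2: ("kitchen", "break room")}
```

Detecting such a conflict is the easy part; deciding, over time and in dialogue with the user, which belief to keep is where active logic could come in.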