
    Reasoning with Preferences in Service Robots

    Service robots should be able to reason about preferences when assisting people in common daily tasks. This functionality is useful, for instance, for responding to action directives whose definite referent conflicts with the user's interests or well-being, or for resolving underspecified commands. Preferences are defeasible knowledge, as they can change with time or context, and should be stored in a non-monotonic knowledge-base system capable of expressing incomplete knowledge, dynamically updating defaults and exceptions, and handling multiple extensions. Non-monotonicity is handled using a generalized version of the Principle of Specificity, which states that in case of conflicting knowledge the most specific proposition should be preferred. Reasoning about preferences is invoked on demand through conversational protocols that are generic and domain independent.
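    The idea of defaults overridden by more specific exceptions can be sketched in code. The following is a minimal illustrative sketch, not the system's actual implementation: preference rules carry a context (a set of conditions), and a query returns the value of the most specific rule whose context holds in the current situation, so exceptions defeat defaults. The class name, rules, and example contexts are all hypothetical.

    ```python
    class PreferenceKB:
        """Toy non-monotonic preference store resolved by specificity."""

        def __init__(self):
            # Each rule: (context, attribute, value). A larger context
            # means a more specific rule; the empty context is a default.
            self.rules = []

        def add_rule(self, context, attribute, value):
            self.rules.append((frozenset(context), attribute, value))

        def query(self, situation, attribute):
            # Keep rules for this attribute whose context is satisfied
            # by the current situation.
            applicable = [(ctx, val) for ctx, attr, val in self.rules
                          if attr == attribute and ctx <= situation]
            if not applicable:
                return None
            # Principle of Specificity: prefer the rule with the most
            # specific (largest) satisfied context.
            ctx, val = max(applicable, key=lambda rule: len(rule[0]))
            return val

    kb = PreferenceKB()
    kb.add_rule([], "drink", "coffee")                         # default
    kb.add_rule(["evening"], "drink", "tea")                   # exception
    kb.add_rule(["evening", "guest_present"], "drink", "wine") # more specific

    print(kb.query(set(), "drink"))                        # coffee
    print(kb.query({"evening"}, "drink"))                  # tea
    print(kb.query({"evening", "guest_present"}, "drink")) # wine
    ```

    When two equally specific rules apply, this sketch picks one arbitrarily; a fuller treatment of such multiple extensions requires a policy for choosing among them, as the abstract notes.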


    We show a demonstration scenario in which Golem-III assists a human user returning home, taking the user's preferences into account.