I am a Ph.D. graduate from Tufts University, where I studied Human-Robot Interaction, exploring how AR technology can improve interaction and communication between humans and robots 🙂
-
🔭 I’m currently working on a Human-Robot Interaction study that investigates how users interact with a robot when they have access to an AR device that visualizes the robot’s sensory, cognitive decision-making, diagnostic, safety, and activity data. Specifically, I want to explore: “What types of robot information do users want, or would rather see, in AR when completing a task with a robot?”, “Are users more confident completing a task with a robot when aided by an AR device?”, and “Do users with an AR device complete robot tasks more quickly than users without one?” To answer these questions, I am running a study in our lab in which recruited participants, in groups of two, complete robot tasks that differ in the types of robot information displayed. Measurements include total completion time, accuracy, and subjective confidence ratings.
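As a rough illustration of how those three measurements could be aggregated, here is a minimal Python sketch; the trial fields, condition labels, and rating scale are assumptions for illustration, not the study’s actual instruments:

```python
from dataclasses import dataclass
from statistics import mean

# Hypothetical per-trial record: one row per participant pair per condition.
@dataclass
class Trial:
    condition: str    # e.g. "sensory", "diagnostic", "no-AR" (labels assumed)
    start_s: float    # task start time, in seconds
    end_s: float      # task end time, in seconds
    errors: int       # mistakes made during the task
    steps: int        # total steps in the task
    confidence: int   # subjective rating, e.g. a 1-7 Likert score (scale assumed)

def summarize(trials: list[Trial], condition: str) -> dict:
    """Aggregate the three study measures for one condition."""
    subset = [t for t in trials if t.condition == condition]
    return {
        "mean_completion_s": mean(t.end_s - t.start_s for t in subset),
        "mean_accuracy": mean((t.steps - t.errors) / t.steps for t in subset),
        "mean_confidence": mean(t.confidence for t in subset),
    }
```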
-
🤔 I’m looking for help finding ways to integrate gesture recognition for a more natural HRI experience. I want to explore EMG devices that recognize common human gestures as enhanced command signals for robots. Pairing an EMG device with an AR head-mounted display that supports eye-gaze tracking could enable multi-robot interaction.
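To make the gaze-plus-gesture idea concrete, here is a minimal, hypothetical Python sketch in which eye gaze selects which robot a command addresses and an EMG-recognized gesture selects the command itself; the gesture labels and inputs are stand-ins, not the API of any particular EMG or AR device:

```python
from typing import Optional

# Assumed gesture labels an EMG classifier might emit (hypothetical).
GESTURE_TO_COMMAND = {
    "fist": "stop",
    "wave_out": "move_forward",
    "wave_in": "move_back",
    "fingers_spread": "release_gripper",
}

def dispatch(gaze_target: Optional[str], gesture: str) -> Optional[tuple[str, str]]:
    """Fuse gaze and gesture: return (robot_id, command) when the user is
    looking at a robot and makes a recognized gesture; otherwise ignore."""
    command = GESTURE_TO_COMMAND.get(gesture)
    if gaze_target is None or command is None:
        return None
    return (gaze_target, command)

# Example: the user looks at "robot_2" and makes a fist.
print(dispatch("robot_2", "fist"))  # -> ('robot_2', 'stop')
```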
-
📫 How to reach me: Email [email protected]
-
😄 Pronouns: He/Him/His
-
👀 Be sure to check out my mini [AR demos]