About Me

I am a Postdoctoral Associate at the Massachusetts Institute of Technology (MIT) in the 77 Lab, collaborating with Hermano Igo Krebs. My research focuses on developing advanced AI-powered mobile platforms for home-based rehabilitation of neurological movement and balance disorders. I hold a Ph.D. in Deep Learning and Robotics from Tel-Aviv University, where I worked in Dr. Avishai Sintov’s Robotics Lab, and earned both my B.Sc. and M.Sc. in Electronic Engineering at Ariel University under Prof. Yosef Pinhasi, where I conducted research in image and signal processing.

My research interests include Human-Robot Interaction (HRI), Deep Learning, Computer Vision, and Rehabilitation Robotics.

News

  • May 2025: New preprint: "DiG-Net: Enhancing Quality of Life through Hyper-Range Dynamic Gesture Recognition in Assistive Robotics" (under journal review)
  • May 2025: Joined The 77 Lab at MIT as a Postdoctoral Associate.
  • April 2025: New preprint: "Speech-to-Trajectory: Learning Human-Like Verbal Guidance for Robot Motion" (under journal review)
  • March 2025: Appointed as a reviewer for ACM Transactions on Human-Robot Interaction (THRI).
  • January 2025: Awarded the Blavatnik Cambridge Fellowship for postdoctoral research.
  • November 2024: The paper "A Diffusion-Based Data Generator for Training Object Recognition Models in Ultra-Range Distance" was accepted for publication in IEEE Robotics and Automation Letters (RA-L).
  • June 2024: Paper accepted: "Ultra-Range Gesture Recognition using a Web-Camera in Human–Robot Interaction" in Engineering Applications of Artificial Intelligence (Elsevier).

Research

Speech-to-Trajectory: Learning Human-Like Verbal Guidance for Robot Motion

Research Thumbnail
  • This work presents the Directive Language Model (DLM), a novel speech-to-trajectory framework that enables robots to follow natural verbal guidance and interpret diverse user commands. DLM maps spoken language directly to executable motion trajectories without relying on predefined phrases, using semantic augmentation and diffusion policy-based trajectory generation. The model achieves robust, human-like motion execution and demonstrates superior accuracy and generalization compared to state-of-the-art methods.
  • Eran Bamani, Eden Nissinman and Avishai Sintov
  • The paper is currently under review at IEEE Robotics and Automation Letters. Speech-to-Trajectory: Learning Human-Like Verbal Guidance for Robot Motion.

DiG-Net: Enhancing Quality of Life through Hyper-Range Dynamic Gesture Recognition in Assistive Robotics

Research Thumbnail
  • This work presents DiG-Net, a novel model for assistive robotics that enables accurate recognition of dynamic hand gestures at distances up to 30 meters using only an RGB camera. Integrating depth-conditioned alignment and spatio-temporal graph modules, DiG-Net achieves 97.3% accuracy and significantly improves the real-world usability of assistive robots.
  • Eran Bamani, Eden Nissinman and Avishai Sintov
  • The paper is currently under review at Computer Vision and Image Understanding. DiG-Net: Enhancing Quality of Life through Hyper-Range Dynamic Gesture Recognition in Assistive Robotics.

Recognition of Dynamic Hand Gestures in Long Distance using a Web-Camera for Robot Guidance

Research Thumbnail
  • In this paper, we propose a model for recognizing dynamic gestures from a long distance of up to 20 meters. The model integrates the SlowFast and Transformer architectures (SFT) to effectively process and classify complex gesture sequences captured in video frames. SFT demonstrates superior performance over existing models.
  • Eran Bamani, Eden Nissinman and Avishai Sintov
  • The paper was accepted to the 40th-anniversary edition of the IEEE International Conference on Robotics and Automation (ICRA@40). Recognition of Dynamic Hand Gestures in Long Distance using a Web-Camera for Robot Guidance.

A Diffusion-based Data Generator for Training Object Recognition Models in Ultra-Range Distance

Research Thumbnail
  • In this paper, we propose the Diffusion in Ultra-Range (DUR) framework, utilizing a Diffusion model to generate labeled images of distant objects in various scenes. DUR trains a URGR model on directive gestures, showing superior fidelity and recognition rates compared to other models. Importantly, training a DUR model on limited real data and using it to generate synthetic data for URGR training outperforms direct real data training. We demonstrate its effectiveness in guiding a ground robot with gesture commands.
  • Eran Bamani, Eden Nissinman, Inbar Meir, Lisa Koenigsberg and Avishai Sintov
  • The paper was accepted for publication in IEEE Robotics and Automation Letters. A Diffusion-based Data Generator for Training Object Recognition Models in Ultra-Range Distance.

Ultra-Range Gesture Recognition using a Web-Camera in Human-Robot Interaction

Research Thumbnail
  • In this paper, we address and explore the Ultra-Range Gesture Recognition (URGR) problem, aiming for an effective distance of up to 25 meters. We propose a data-based approach that requires no depth information, only a simple RGB camera. A pioneering aspect of our research was the development of novel deep-learning architectures: HQ-Net was designed to enhance image quality, while GViT was tailored to the recognition of human gestures.
  • Eran Bamani, Eden Nissinman, Inbar Meir, Lisa Koenigsberg and Avishai Sintov
  • The paper was accepted for publication in Engineering Applications of Artificial Intelligence (Elsevier), 2024. Ultra-Range Gesture Recognition using a Web-Camera in Human-Robot Interaction.

Flip-U-Net for In-Hand Object Recognition Using a Force-Myography Device

Research Thumbnail
  • In this paper, we propose a novel deep neural-network architecture for in-hand object recognition using a wearable force-myography device. The device is based on Force-Myography (FMG), where simple and affordable force sensors measure perturbations of forearm muscles. We show that the proposed network can classify objects grasped by multiple new users without additional training effort.
  • Eran Bamani, Nadav Kahanowich, Inbar Ben-David and Avishai Sintov
  • The paper was accepted to the International Conference on Robotics and Automation and the Israeli Conference on Robotics. In-Hand Object Recognition Using a Force-Myography Device.

Recognition and Estimation of Human Finger Pointing with an RGB Camera for Robot Directive

Research Thumbnail
  • In this paper, we explore the learning of models that allow robots to understand pointing directives in various indoor and outdoor environments based solely on a single RGB camera. We propose a novel framework with a designated model termed PointingNet, which recognizes the occurrence of pointing and then approximates the position and direction of the index finger.
  • Eran Bamani, Eden Nissinman, Lisa Koenigsberg, Inbar Meir, Yoav Matalon and Avishai Sintov
  • The paper is currently under review. Recognition and Estimation of Human Finger Pointing with an RGB Camera for Robot Directive.

Open-Sourcing Generative Models for Data-driven Robot Simulations

Research Thumbnail
  • In this paper, we propose to disseminate a generative model rather than the actual recorded data. We use a limited amount of real data from a robot to train a Generative Adversarial Network (GAN). We show on two robotic systems that training a regression model on the generated synthetic data provides transition accuracy at least as good as training on real data. Such a model could be open-sourced along with the hardware to provide easy and rapid access to research platforms.
  • Eran Bamani, Anton Gurevich, Osher Azulay and Avishai Sintov
  • The paper was accepted to the NeurIPS Data-Centric AI Workshop 2021 (oral). Open-Sourcing Generative Models for Data-driven Robot Simulations.

Learning a Data-Efficient Model for a Single Agent in Homogeneous Multi-Agent Systems

  • In this paper, we present a novel real-to-sim-to-real framework to bridge the reality gap for homogeneous multi-agent systems. First, we propose a novel deep neural-network architecture termed Convolutional-Recurrent Network (CR-Net) to simulate agents.
  • Eran Bamani, Anton Gurevich and Avishai Sintov
  • The paper was accepted for publication in Neural Computing and Applications (Springer), 2023. Learning a Data-Efficient Model for a Single Agent in Homogeneous Multi-Agent Systems.

Robust Multi-User In-Hand Object Recognition in Human-Robot Collaboration Using a Wearable Force-Myography Device

  • In this paper, we explore the use of a wearable device to non-visually recognize objects within the human hand in various possible grasps. The device is based on Force-Myography (FMG), where simple and affordable force sensors measure perturbations of forearm muscles. We propose a novel deep neural-network architecture termed Flip-U-Net, inspired by the familiar U-Net architecture used for image segmentation.
  • Eran Bamani, Nadav D. Kahanowich, Inbar Ben-David and Avishai Sintov
  • The paper was accepted for publication in IEEE Robotics and Automation Letters, 2021. Robust Multi-User In-Hand Object Recognition in Human-Robot Collaboration Using a Wearable Force-Myography Device.

Scaled Modeling and Measurement for Studying Radio Wave Propagation in Tunnels

  • This work is based on the ray-tracing approach, which is useful for structures where the dimensions are orders of magnitude larger than the transmission wavelength. Using image theory, we utilized a multi-ray model to reveal non-dimensional parameters, enabling measurements in down-scaled experiments.
  • Jacob Gerasimov, Nezah Balal, Eran Bamani, Gad A. Pinhasi and Yosef Pinhasi
  • The paper was accepted for publication in Electronics (MDPI), 2020. Scaled Modeling and Measurement for Studying Radio Wave Propagation in Tunnels.

Study Of Human Body Effect On Wireless Indoor Communication

  • Our research presents signal strength measurements, analysis, and prediction models for indoor, outdoor, and near-human-body scenarios. The measurements were conducted using a continuous-wave transmitter and receiver antenna pair at 0.5 GHz.
  • Eran Bamani and Gad A. Pinhasi
  • The paper was accepted to the Israeli-Russian Bi-National Workshop 2019. Study of Human Body Effect on Wireless Indoor Communication.

Academic Appointments

  • 2025 - Present, Massachusetts Institute of Technology (MIT), Cambridge, MA,
    Postdoctoral Associate at The 77 Lab

Education Background

  • 2021 - 2025, Tel-Aviv University,
    Ph.D. in Deep Learning and Robotics, ISF Fellow

  • 2013 - 2019, Ariel University,
    B.Sc. and M.Sc. degrees in Electronic Engineering, GPA 92/100

Awards

  • Blavatnik Cambridge Fellowship - Selected as one of the Blavatnik Fellows for postdoctoral research at the University of Cambridge, 2025
  • Outstanding Research Achievement - ME Graduate Research Award (PhD), 2023
  • Recognized for outstanding research contributions in HRI by the Israel Innovation Authority (IIA), 2022
  • Awarded research excellence recognition by the Israel Science Foundation (ISF), 2021
  • Awarded a Ministry of Defense (MAFAT) fellowship for excellence in research, 2017, 2018
  • Dean's list: second year of B.Sc., 2015
  • Dean's list and dean's award (full-tuition scholarship): first year of B.Sc., 2014

Skills

  • Deep Learning frameworks: PyTorch, TensorFlow, JAX, Hugging Face and TorchServe
  • Development Tools and Libraries: PyCharm, VSCode, Git, Docker, NVIDIA CUDA, OpenGL and OpenCV
  • Programming Languages: Python, C/C++, Java, Bash and MATLAB
  • Experience designing novel neural network architectures and optimization methods
  • Medical CAD: 3D-Slicer, SimpleITK and RadiAnt
  • Robotics: ROS, Isaac Sim / Lab and RVW