About Me

I am a PhD candidate in Deep Learning and Robotics at Tel-Aviv University, where I work on Computer Vision and Human-Robot Interaction in the Robotics Lab under the supervision of Dr. Avishai Sintov. Before joining TAU, I worked with Prof. Michael Werman at The Hebrew University of Jerusalem on Medical Image Processing and Generative Adversarial Networks for medical images.

I received my B.Sc. and M.Sc. in Electronic Engineering from Ariel University under Prof. Yosef Pinhasi, where I worked at the Homeland Security Laboratory on Ministry of Defense (MAFAT) research. My research there focused on Image and Signal Processing and Estimation Techniques.

My research interests include Deep Learning, Computer Vision, Robotics, Human-Robot Interaction (HRI), and Human-Robot Collaboration (HRC).

Research

Recognition of Dynamic Hand Gestures in Long Distance using a Web-Camera for Robot Guidance

  • In this paper, we propose a model for recognizing dynamic gestures from a long distance of up to 20 meters. The model integrates the SlowFast and Transformer architectures (SFT) to effectively process and classify complex gesture sequences captured in video frames. SFT demonstrates superior performance over existing models. An illustrative sketch of the fusion idea follows this entry.
  • Eran Bamani, Eden Nissinman and Avishai Sintov
  • The paper was accepted to the IEEE International Conference on Robotics and Automation (ICRA@40). Recognition of Dynamic Hand Gestures in Long Distance using a Web-Camera for Robot Guidance.
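A minimal PyTorch sketch of the general fusion idea, for illustration only: two 3D-convolutional pathways process the clip at a low and a high frame rate, and a Transformer encoder attends over the fused per-frame features to classify the gesture. All layer sizes, the class count, and the module names are placeholders, not the published SFT architecture.

```python
import torch
import torch.nn as nn

class SlowFastTransformer(nn.Module):
    """Illustrative two-pathway video classifier: a slow (low frame-rate) and a
    fast (high frame-rate) 3D-conv pathway, fused by a Transformer encoder.
    Sizes are placeholders, not the published SFT model."""
    def __init__(self, num_classes=6, d_model=128, alpha=4):
        super().__init__()
        self.alpha = alpha  # temporal subsampling factor of the slow pathway
        self.slow = nn.Sequential(
            nn.Conv3d(3, 64, kernel_size=(1, 7, 7), stride=(1, 2, 2), padding=(0, 3, 3)),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d((None, 1, 1)),   # keep the temporal axis, pool space
        )
        self.fast = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=(5, 7, 7), stride=(1, 2, 2), padding=(2, 3, 3)),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d((None, 1, 1)),
        )
        self.proj = nn.Linear(64 + 16, d_model)
        encoder_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.head = nn.Linear(d_model, num_classes)

    def forward(self, clip):                               # clip: (B, 3, T, H, W)
        slow_feat = self.slow(clip[:, :, ::self.alpha])    # subsampled frames, slow path
        fast_feat = self.fast(clip)                        # all frames, fast path
        slow_feat = slow_feat.flatten(2).transpose(1, 2)   # -> (B, T/alpha, 64)
        fast_feat = fast_feat.flatten(2).transpose(1, 2)   # -> (B, T, 16)
        # align temporal lengths by repeating the slow features
        slow_feat = slow_feat.repeat_interleave(self.alpha, dim=1)[:, :fast_feat.size(1)]
        tokens = self.proj(torch.cat([slow_feat, fast_feat], dim=-1))
        encoded = self.encoder(tokens)                     # temporal attention over frames
        return self.head(encoded.mean(dim=1))              # pool tokens, classify gesture

if __name__ == "__main__":
    model = SlowFastTransformer()
    logits = model(torch.randn(2, 3, 16, 112, 112))   # 2 clips of 16 RGB frames
    print(logits.shape)                               # torch.Size([2, 6])
```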

A Diffusion-based Data Generator for Training Object Recognition Models in Ultra-Range Distance

  • In this paper, we propose the Diffusion in Ultra-Range (DUR) framework, which utilizes a diffusion model to generate labeled images of distant objects in various scenes. DUR is used to train an Ultra-Range Gesture Recognition (URGR) model on directive gestures, showing superior fidelity and recognition rates compared to other models. Importantly, training a DUR model on limited real data and using it to generate synthetic data for URGR training outperforms training directly on the real data. We demonstrate its effectiveness in guiding a ground robot with gesture commands.
  • Eran Bamani, Eden Nissinman, Inbar Meir, Lisa Koenigsberg and Avishai Sintov
  • The paper is under review. A Diffusion-based Data Generator for Training Object Recognition Models in Ultra-Range Distance.

Ultra-Range Gesture Recognition using a Web-Camera in Human-Robot Interaction

  • In this paper, we address and explore the Ultra-Range Gesture Recognition (URGR) problem, aiming for an effective distance of up to 25 meters. We propose a data-based approach that requires only a simple RGB camera and no depth information. A pioneering aspect of our research is the development of novel deep-learning architectures: HQ-Net enhances image quality, while GVIT recognizes the human gestures. An illustrative sketch of the two-stage pipeline follows this entry.
  • Eran Bamani, Eden Nissinman, Inbar Meir, Lisa Koenigsberg and Avishai Sintov
  • The paper was accepted to Engineering Applications of Artificial Intelligence (Elsevier), 2024. Ultra-Range Gesture Recognition using a Web-Camera in Human-Robot Interaction.
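A minimal sketch of the two-stage pipeline, for illustration only: the low-resolution crop of the distant user is first enhanced, then classified into a gesture. The placeholder modules below only mimic the roles of HQ-Net and the gesture recognizer; their layers and sizes are assumptions, not the published models.

```python
import torch
import torch.nn as nn

class QualityEnhancer(nn.Module):
    """Placeholder standing in for HQ-Net: upsamples and refines a low-resolution RGB crop."""
    def __init__(self, scale=4):
        super().__init__()
        self.upsample = nn.Upsample(scale_factor=scale, mode="bilinear", align_corners=False)
        self.refine = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                    nn.Conv2d(16, 3, 3, padding=1))

    def forward(self, crop):              # crop: (B, 3, h, w), small and blurry
        return self.refine(self.upsample(crop))

class GestureClassifier(nn.Module):
    """Placeholder standing in for the gesture-recognition model."""
    def __init__(self, num_gestures=6):
        super().__init__()
        self.backbone = nn.Sequential(nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                                      nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(32, num_gestures)

    def forward(self, img):
        return self.head(self.backbone(img))

if __name__ == "__main__":
    enhance, classify = QualityEnhancer(), GestureClassifier()
    user_crop = torch.randn(1, 3, 32, 32)      # detected user far away -> tiny crop
    logits = classify(enhance(user_crop))      # enhance first, then recognize
    print(logits.argmax(dim=1))                # predicted gesture index
```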

Flip-U-Net for In-Hand Object Recognition Using a Force-Myography Device

  • In this paper, we propose a novel Deep Neural-Network architecture for in-hand object recognition using a wearable Force-Myography device. The device is based on Force-Myography (FMG), where simple and affordable force sensors measure perturbations of the forearm muscles. We show that the proposed network can classify objects grasped by multiple new users without additional training effort.
  • Eran Bamani, Nadav Kahanowich, Inbar Ben-David and Avishai Sintov
  • The paper was accepted to the International Conference on Robotics and Automation and the Israeli Conference on Robotics. In-Hand Object Recognition Using a Force-Myography Device.

Recognition and Estimation of Human Finger Pointing with an RGB Camera for Robot Directive

  • In this paper, we explore learning models that allow robots to understand pointing directives in various indoor and outdoor environments using only a single RGB camera. We propose a novel framework with a designated model termed PointingNet, which recognizes the occurrence of pointing and then estimates the position and direction of the index finger. An illustrative sketch of this two-stage idea follows this entry.
  • Eran Bamani, Eden Nissinman, Lisa Koenigsberg, Inbar Meir, Yoav Matalon and Avishai Sintov
  • The paper is under review. Recognition and Estimation of Human Finger Pointing with an RGB Camera for Robot Directive.
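A minimal sketch of the two-stage idea, for illustration only: a shared image backbone feeds a detection head (is pointing occurring?) and a regression head (finger position and pointing direction). The architecture and sizes below are placeholders, not the published PointingNet.

```python
import torch
import torch.nn as nn

class PointingEstimator(nn.Module):
    """Illustrative pointing model: detects whether pointing occurs in an RGB frame
    and regresses the index-finger position (x, y, z) and pointing direction."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.detect = nn.Linear(64, 1)    # logit: is the person pointing?
        self.regress = nn.Linear(64, 6)   # finger position (3) + direction (3)

    def forward(self, frame):             # frame: (B, 3, H, W) RGB image
        feat = self.backbone(frame)
        is_pointing = torch.sigmoid(self.detect(feat))
        pos, direction = self.regress(feat).split(3, dim=1)
        direction = nn.functional.normalize(direction, dim=1)   # unit direction vector
        return is_pointing, pos, direction

if __name__ == "__main__":
    model = PointingEstimator()
    p, pos, d = model(torch.randn(1, 3, 224, 224))
    print(p.item(), pos.shape, d.shape)
```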

Open-Sourcing Generative Models for Data-driven Robot Simulations

  • In this paper, we propose to disseminate a generative model rather than the actual recorded data. We use a limited amount of real data from a robot to train a Generative Adversarial Network (GAN) and show, on two robotic systems, that training a regression model on the generated synthetic data provides transition accuracy at least as good as training on real data. Such a model could be open-sourced along with the hardware to provide easy and rapid access to research platforms. An illustrative sketch of the pipeline follows this entry.
  • Eran Bamani, Anton Gurevich, Osher Azulay and Avishai Sintov
  • The paper was accepted to NeurIPS Data-Centric AI 2021 (Oral). Open-Sourcing Generative Models for Data-driven Robot Simulations.
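A minimal sketch of the pipeline, for illustration only: a GAN is fitted to a small set of recorded robot transitions, and the trained generator, shared instead of the raw data, is then sampled to train a regression model of the transition dynamics. Dimensions, network sizes, and training details are placeholders, not the published setup.

```python
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM, NOISE_DIM = 4, 2, 8
TRANS_DIM = STATE_DIM + ACTION_DIM + STATE_DIM     # one flattened (s, a, s') transition

generator = nn.Sequential(                 # noise -> synthetic transition
    nn.Linear(NOISE_DIM, 64), nn.ReLU(),
    nn.Linear(64, TRANS_DIM),
)
discriminator = nn.Sequential(             # transition -> real/fake logit
    nn.Linear(TRANS_DIM, 64), nn.ReLU(),
    nn.Linear(64, 1),
)

def train_gan(real_transitions, epochs=200):
    """Standard non-saturating GAN training on flattened transitions."""
    opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
    bce = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        noise = torch.randn(real_transitions.size(0), NOISE_DIM)
        fake = generator(noise)
        # discriminator step: real -> 1, fake -> 0
        d_loss = bce(discriminator(real_transitions), torch.ones(len(real_transitions), 1)) + \
                 bce(discriminator(fake.detach()), torch.zeros(len(fake), 1))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()
        # generator step: fool the discriminator
        g_loss = bce(discriminator(fake), torch.ones(len(fake), 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()

def train_transition_model(num_synthetic=10_000):
    """Fit a regression model s' = f(s, a) on generator samples only."""
    with torch.no_grad():
        synthetic = generator(torch.randn(num_synthetic, NOISE_DIM))
    x, y = synthetic[:, :STATE_DIM + ACTION_DIM], synthetic[:, STATE_DIM + ACTION_DIM:]
    model = nn.Sequential(nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(),
                          nn.Linear(64, STATE_DIM))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(200):
        loss = nn.functional.mse_loss(model(x), y)
        opt.zero_grad(); loss.backward(); opt.step()
    return model

if __name__ == "__main__":
    real = torch.randn(256, TRANS_DIM)     # placeholder for a small recorded dataset
    train_gan(real)
    dynamics_model = train_transition_model()
```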

Learning a Data-Efficient Model for a Single Agent in Homogeneous Multi-Agent Systems

  • In this paper, we present a novel real-to-sim-to-real framework to bridge the reality gap for homogeneous multi-agent systems. We propose a novel Deep Neural-Network architecture termed Convolutional-Recurrent Network (CR-Net) to simulate the agents.
  • Eran Bamani, Anton Gurevich and Avishai Sintov
  • The paper was accepted to Neural Computing and Applications (Springer), 2023. Learning a Data-Efficient Model for a Single Agent in Homogeneous Multi-Agent Systems.

Robust Multi-User In-Hand Object Recognition in Human-Robot Collaboration Using a Wearable Force-Myography Device

  • In this paper, we explore the use of a wearable device to non-visually recognize objects within the human hand in various possible grasps. The device is based on Force-Myography (FMG), where simple and affordable force sensors measure perturbations of the forearm muscles. We propose a novel Deep Neural-Network architecture termed Flip-U-Net, inspired by the familiar U-Net architecture used for image segmentation. An illustrative sketch of the general idea follows this entry.
  • Eran Bamani, Nadav D. Kahanowich, Inbar Ben-David and Avishai Sintov
  • The paper was accepted to IEEE Robotics and Automation Letters, 2021. Robust Multi-User In-Hand Object Recognition in Human-Robot Collaboration Using a Wearable Force-Myography Device.
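A minimal sketch of the general idea, for illustration only: a U-Net-style 1D encoder-decoder with a skip connection over the FMG sensor readings, followed by a classification head over the grasped-object classes. The sensor count, depth, and sizes below are placeholders, not the published Flip-U-Net.

```python
import torch
import torch.nn as nn

class FMGUNetClassifier(nn.Module):
    """Illustrative U-Net-style 1D encoder-decoder with a skip connection over an
    FMG sensor signal, followed by a classification head. Only a sketch of the
    general idea; all sizes are placeholders."""
    def __init__(self, n_sensors=16, n_objects=8):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv1d(1, 16, 3, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv1d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.dec1 = nn.Sequential(nn.ConvTranspose1d(32, 16, 4, stride=2, padding=1), nn.ReLU())
        self.dec2 = nn.Sequential(nn.Conv1d(32, 16, 3, padding=1), nn.ReLU())  # after skip concat
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(16 * n_sensors, n_objects))

    def forward(self, x):                  # x: (B, 1, n_sensors) force readings
        e1 = self.enc1(x)                  # (B, 16, n_sensors)
        e2 = self.enc2(e1)                 # (B, 32, n_sensors/2) bottleneck
        d1 = self.dec1(e2)                 # (B, 16, n_sensors) upsampled
        d2 = self.dec2(torch.cat([d1, e1], dim=1))   # skip connection, U-Net style
        return self.head(d2)               # logits over grasped-object classes

if __name__ == "__main__":
    model = FMGUNetClassifier()
    logits = model(torch.randn(4, 1, 16))  # batch of 4 FMG readings
    print(logits.shape)                    # torch.Size([4, 8])
```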

Scaled Modeling and Measurement for Studying Radio Wave Propagation in Tunnels

  • This work is based on the ray-tracing approach, which is useful for structures whose dimensions are orders of magnitude larger than the transmission wavelength. Using image theory, we apply a multi-ray model to derive non-dimensional parameters, enabling measurements in down-scaled experiments.
  • Jacob Gerasimov, Nezah Balal, Eran Bamani, Gad A. Pinhasi and Yosef Pinhasi
  • The paper was accepted to Electronics (MDPI), 2020. Scaled Modeling and Measurement for Studying Radio Wave Propagation in Tunnels.

Study of Human Body Effect on Wireless Indoor Communication

  • Our research presents signal-strength measurements, analysis, and prediction models for indoor, outdoor, and near-human-body scenarios. The measurements were conducted using a continuous-wave transmitter and receiver antenna pair at 0.5 GHz.
  • Eran Bamani and Gad A. Pinhasi
  • The paper was accepted to the Israeli-Russian Bi-National Workshop 2019. Study of Human Body Effect on Wireless Indoor Communication.

Projects

Education Background

  • 2021 - Present, Tel-Aviv University,
    PhD Student in Deep Learning and Robotics, ISF Fellow
  • 2019 - 2020, The Hebrew University of Jerusalem,
    PhD Student in Deep Learning and Computer Vision
  • 2013 - 2019, Ariel University,
    B.Sc. and M.Sc. degrees in Electronic Engineering, GPA 92/100

Awards

  • Outstanding Research Achievement – ME Graduate Research Award (PhD), 2023
  • Israel Innovation Authority (IIA) prize for HRI, 2022
  • Israel Science Foundation (ISF) prize, 2021
  • Ministry of Defense (MAFAT) prize, 2017, 2018
  • Dean's Fellowship, 2014, 2015

Skills

  • Deep Learning frameworks: PyTorch, TensorFlow, Keras and Theano
  • Tools and Libraries: PyCharm, Spyder, NVIDIA CUDA, OpenGL, OpenCV
  • Programming Languages: Python, C/C++, Java and MATLAB
  • Experienced with developing new machine learning techniques
  • Medical CAD: 3D-Slicer and RadiAnt
  • Robotics: ROS and RVW