Sense-Hub Page and Forum dedicated
to IRIS (Intuitively Regulated Intel System)

Learn more about the IRIS project

The IRIS (Intuitively Regulated Intel System) Project seeks to foster the growth of augmented reality technology as a tool for social and digital inclusion. It acts as a neutral voice for propagating and generating original content from advances in the understanding of the haptic platform and augmented reality, and it promotes innovation by hosting collaborative events between the technical community, application developers, industry, and end users to address industry issues.
 
Through community outreach programs, end users, developers, and industry members collaborate on technical, legal, and promotional issues, with the project coordinating that collaboration by providing education and administrative assistance.
The project encourages software and hardware developers to collaborate on the fundamentals of the Sensory Integration Interface, a sensory-addition device: lightweight, functional, and practical equipment that enhances the quality of life of people with specific needs related to difficulty decoding sensory information.
The project aims to create the foundation for providing free information on a specific development kit, accompanied by an integrated development environment, made available through open-source projects so that external developers can better integrate with the proposed software or be encouraged to use the platform.
It is a tool built around principles similar to those of open-source software: Open Source Hardware (OSHW), whose design is made publicly available so that anyone can build, modify, distribute, and use these artifacts.
Augmented reality integrates virtual elements or information into the visualization of the real world, supported by tools such as artificial intelligence and machine learning.
It uses real-time, digitally processed video that is “augmented” by adding tactile and auditory information created by the computer.
Existing technology already improves the quality of life, social integration, and autonomy of people with sensory disabilities (e.g., visual); the aim of the project is to popularize augmented reality for this purpose.
Augmented reality could be a new system for social inclusion.
The user will be able to perceive not only the world around them, but also a layer of computational information through haptic equipment.
It’s the kind of investment that, I believe, no other big tech company is making right now: turning images into electrical signals that are sent to the wearer’s glove or suit through electrodes. Electric micro-shocks, varying in intensity, reproduce the figures captured by cameras mounted on glasses.
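As a rough illustration of how a camera frame could be mapped to electrode intensities, the minimal sketch below downsamples a grayscale image into a coarse grid and scales each cell to a stimulation level. The 16 x 16 array size and the 0-255 brightness range are assumptions made for illustration, not the project's actual hardware.

# Minimal sketch: map a grayscale camera frame to a coarse grid of
# electrode intensities. The 16x16 array size and 0-255 intensity range
# are assumptions for illustration, not the project's actual hardware.
import numpy as np

def frame_to_electrode_grid(frame: np.ndarray, rows: int = 16, cols: int = 16) -> np.ndarray:
    """Downsample a 2-D grayscale frame into rows x cols average intensities."""
    h, w = frame.shape
    grid = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            block = frame[r * h // rows:(r + 1) * h // rows,
                          c * w // cols:(c + 1) * w // cols]
            grid[r, c] = block.mean()
    # Scale mean brightness (0-255) to a stimulation level between 0 and 1.
    return grid / 255.0

# Example: a synthetic 240x320 frame with a bright square in the middle.
frame = np.zeros((240, 320), dtype=np.uint8)
frame[80:160, 120:200] = 255
print(frame_to_electrode_grid(frame).round(2))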
 
There are several ways this technology can become widely and affordably available. The most obvious is to bring AR to cell phone cameras, juxtaposing the spatial data the camera can gather from a real-world space with digital models, and then using AR to generate haptic information. This also allows digital models to interact with the user, who can feel these spaces through haptic gloves.
Models of real objects can be designed for the user to interact with, in the form of digital environments transmitted haptically in real time. A similar approach is already being used by virtual reality developers to give VR gamers a way to visualize how their own bodies move in VR from image data.
 
Although it sounds like science fiction, the logic behind the operation of these devices is known through the study of sensory deficiencies: in the absence of sight or hearing, the brain can learn to see or hear through other senses. It is not the eyes that really see, but the brain.
Users will be able to identify obstacles, locate and read signs, find objects, and tell whether or not a person is reaching out to greet them.
 
These are tasks that cannot be accomplished with a cane or a guide dog alone. A guide dog takes at least two years to train and costs a total of $45,000 to $60,000, and about 45 percent of school-trained dogs cannot “graduate.”
A guide dog must obey several verbal commands from its companion.
The guide dog is responsible for helping the visually impaired person get anywhere; the animal is rigorously trained to follow these rules:
• Stand firm, always to the left or slightly ahead of your companion;
• Move in any direction only when ordered;
• Assist your companion in dealing with public transport;
• Ignore distractions such as people, other animals, smells, etc.;
• Lie quietly while your companion remains seated;
• Recognize and avoid paths with obstacles;
• Always stop at the top or foot of stairs until instructed to follow;
• Bring your companion to the elevator buttons;
• And obey your companion’s verbal commands.
In addition to the rules and skills cited, selective disobedience is essential to the guide dog, meaning he must never obey any command that could endanger his companion.
The training of a guide dog is very rigorous, arduous and gradual, lasting about 2 years.
To obtain a guide dog, the person concerned should contact specialized NGOs.
The WHO projected 75 million blind people worldwide by 2020.
Estimates point to 1.1 million blind people in Brazil and about 4 million people with serious visual impairment.
According to the Brazilian Council of Ophthalmology, there are more than 5.4 million people with visual impairment in the country. Meanwhile, the waiting list for a guide dog has more than 2,000 applicants, but only about 70 people actually receive this help.
Although there are different criteria for getting a guide dog, the American Eye Dog Foundation for the Blind Inc sets the following basic requirements:
• The applicant must be legally blind;
• Have good physical and mental health; have attended or be attending high school;
• Be able to provide the necessary care for the animal, such as housing and food;
• Want the dog for mobility purposes.
Once you have met all the initial requirements, you should contact an NGO that focuses on guide dog training, find out about the process, apply, and join the waiting list.
Technology will not necessarily retire guide dogs, but rather complement them.
The objective is to develop and enable the creation of a tool capable of tracking and interpreting the physical environment and inserting it into immersive experiences, with tactile information transmitted to the user’s hands or to a garment. Users will be able to interact with their own hands in virtual reality without needing companions or guide dogs – something that was not possible until now.
The project seeks to allow a more natural form of interaction. The idea is simple: for many people, analogously to augmented reality glasses, a glove or suit would provide useful information for understanding the surrounding place and for moving safely and independently. Tracking hand movements and inserting them into the immersive experience breaks down a barrier for visually impaired people.
In addition, capturing hand gestures can facilitate augmented reality adoption for visually impaired people, allowing users to scan their surroundings for obstacles and spatial orientation marks.
They can use what they already know early on: feel the world with their own hands.
Assess whether a puddle in a kitchen could pose a risk of an accident.
Intuitively use your hands to perceive the problems pointed out, even in a playful way.
In order to build the new tool, the project seeks to develop algorithms capable of understanding the environment around the user and how the user fits into it, using technologies such as computer vision and machine learning. In this way, the system can distinguish an obstacle such as a post or a hole in the sidewalk, for example, using multiple datasets, with feedback to the hands (and, more elaborately, to the body) plus complementary audio information.
Using the hands to interact with the experience as an immersion is only a first step. “From this, we can make a suit that gives users tactile sensations. The developers will have countless possibilities.”
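A minimal sketch of the obstacle-to-haptic idea, assuming a pretrained detector from torchvision; the model choice, the confidence threshold, and the left/center/right cue mapping are illustrative assumptions, not the project's actual pipeline.

# Sketch: detect obstacles in a frame with a pretrained detector and map
# each detection to a coarse haptic cue (left / center / right + intensity).
# Model choice, threshold, and cue mapping are assumptions for illustration.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detections_to_cues(image: Image.Image, score_threshold: float = 0.6):
    with torch.no_grad():
        output = model([to_tensor(image)])[0]
    width, height = image.width, image.height
    cues = []
    for box, score in zip(output["boxes"], output["scores"]):
        if score < score_threshold:
            continue
        x_center = float(box[0] + box[2]) / 2
        side = "left" if x_center < width / 3 else "right" if x_center > 2 * width / 3 else "center"
        # Larger boxes (closer obstacles) get stronger vibration.
        area_ratio = float((box[2] - box[0]) * (box[3] - box[1])) / (width * height)
        cues.append((side, min(1.0, area_ratio * 4)))
    return cues

# cues = detections_to_cues(Image.open("street.jpg").convert("RGB"))  # hypothetical frame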

It will also be able to interpret the user’s hand gestures and allow them to be used to interact with, and request information about, what the computer sees around them – an important step toward the inclusion of disabled people in society. Living with more autonomy and independence is a desired goal for people with disabilities. The project is building a new computing platform based on human–computer interaction to give people new super senses.
 
The device has a camera attached to the frame of the user’s glasses that instantly photographs, scans, and converts text from any surface into audio – a technology that offers independence to the visually impaired. This works with books, newspapers, magazines, street signs, restaurant menus, store names, cell phone messages, brochures, and more.
Capable of detecting text in Portuguese, English, and other languages, the device can offer speed control to adjust the reading pace, a choice between male and female voices, and commands to pause, fast-forward, or rewind reading. It can recognize objects and, after recognition, relay the information discreetly into the user’s ear. Face recognition helps the user identify the people around them; the device can also recognize colors and banknotes, and report the time, date, weather, and transportation information whenever the user requests it.
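A minimal sketch of the read-aloud pipeline, assuming pytesseract for OCR and pyttsx3 for offline text-to-speech; these libraries, the Portuguese language code, and the speech rate are illustrative stand-ins, not the device's actual firmware.

# Sketch: capture text from an image and speak it aloud.
# pytesseract (Tesseract OCR) and pyttsx3 (offline TTS) are assumed here;
# the real device would use its own embedded OCR and speech stack.
from PIL import Image
import pytesseract
import pyttsx3

def read_text_aloud(image_path: str, lang: str = "por", rate: int = 180) -> str:
    text = pytesseract.image_to_string(Image.open(image_path), lang=lang)
    engine = pyttsx3.init()
    engine.setProperty("rate", rate)  # reading-speed control
    engine.say(text)
    engine.runAndWait()
    return text

# read_text_aloud("menu_photo.jpg")  # hypothetical captured frame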
The system will also feature an intelligent personal assistant: software that can perform tasks or services for an individual. These tasks or services are based on user input, geolocation, and the ability to access information from a variety of online sources, such as weather and traffic conditions, news, grocery prices, the user’s schedule, and more. Examples of smart virtual assistants are Siri, Google Assistant, and Microsoft Cortana.
Such a system accesses a database and generates a response, which makes it easier and faster for virtual assistant users to obtain information. In addition, virtual assistants are able, through artificial intelligence, to enrich that database by recognizing more questions and answers over time.
These systems learn more and more through user interactions, making assistance more effective and solving user problems more easily, so they are constantly evolving and always looking for a more human way to help.
Virtual assistants use natural language processing to recognize a voice or gesture command and execute a valid command, which makes assistance more personalized.
You can interact with an intelligent personal assistant via text, voice or gesture.
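A minimal sketch of how such an assistant can match a command to a canned response. This uses simple keyword matching only; the intents and phrases below are made up for illustration, and a real assistant would rely on natural language processing.

# Sketch: trivially match a text (or voice-transcribed) command against
# known intents and produce a reply. Intents and phrasing are illustrative.
from datetime import datetime

INTENTS = {
    "time": ["what time", "current time", "tell me the time"],
    "date": ["what day", "today's date", "what is the date"],
}

def answer(command: str) -> str:
    command = command.lower()
    for intent, phrases in INTENTS.items():
        if any(phrase in command for phrase in phrases):
            if intent == "time":
                return datetime.now().strftime("It is %H:%M.")
            if intent == "date":
                return datetime.now().strftime("Today is %A, %d %B %Y.")
    return "Sorry, I did not understand. Can you rephrase?"

print(answer("What time is it?"))
print(answer("Tell me today's date"))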
Commercially available smart virtual assistants can be classified according to the attributes they have:
Behavior
Passive: They present themselves to the customer only when asked for help.
Dynamic: They present themselves to the customer as soon as the customer becomes active in the system, that is, enters the site, turns on the tablet, powers up the smartphone, etc.
Triggered dynamic: They present themselves when the customer’s behavior suggests a need for help, for example after multiple failed attempts to enter a password for a specific page, or when the customer keeps returning to the same page for no apparent reason.
Purpose
General: Assist the customer by interacting about general subjects, such as news being published in the media.
Specialist: Assist the customer with interactions on specific subjects, such as how to shop on a website or financial advice.
 
Note
Proactive: They are able to observe customer behavior and, at the appropriate time, suggest a list of additional subjects (a “learn more”) that may interest the customer.
Reactive: They only answer the questions they are asked.
 
Presentation
With Avatar: They present themselves in the form of an image, usually a human figure or a robot.
Without Avatar: They do not use an image-based persona; they usually present themselves as a dialog box with messages such as “How can I help you?”
 
Communication
Sociable: The assistant is attentive and courteous, showing some care in the information it offers.
Indifferent: The assistant answers questions mechanically, providing only bare information.
Integration
Integrated: They are able to access corporate information systems to provide information to the customer; they may also use customer-supplied data to update the information contained in those systems.
Non-integrated: They are unable to access information systems and are therefore more limited in the information they can provide to the customer.
 
Computer vision is currently one of the most important research fields within deep learning. It sits at the crossroads of many academic disciplines, such as computer science (graphics, algorithms, theory, systems, architecture), mathematics (information retrieval, machine learning), engineering (robotics, speech, NLP, image processing), physics (optics), biology (neuroscience), and psychology (cognitive science).
Computer Vision represents a relative understanding of visual environments and their contexts.
Here are some formal definitions of textbooks:
• “the construction of explicit and meaningful descriptions of physical objects from images” (Ballard & Brown, 1982)
• “computing properties of the 3D world from one or more digital images” (Trucco & Verri, 1998)
• “to make useful decisions about real physical objects and scenes based on sensed images” (Stockman & Shapiro, 2001)
 
• Face recognition: algorithms for detecting a face and recognizing it during social interactions (see the sketch after this list).
• Image retrieval: Google Images uses content-based queries to search for relevant images; the algorithms analyze the content of the query image and return results based on the best-matched content, answering questions such as: where did I leave my keys?
• A source of information for detecting traffic signs, traffic light colors, lights, and other visual signals.
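As an example of the face-recognition building block mentioned above, the minimal sketch below uses OpenCV's bundled Haar cascade to detect faces in a frame. It covers detection only; matching a face to a known person would require an additional recognition model, and the input file name is hypothetical.

# Sketch: detect faces in a camera frame with OpenCV's bundled Haar cascade.
# This covers detection only; recognizing *who* the face belongs to would
# need an extra embedding/recognition step.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(image_path: str):
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [(int(x), int(y), int(w), int(h)) for (x, y, w, h) in faces]

# print(detect_faces("social_scene.jpg"))  # hypothetical input image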
Visual recognition tasks, such as classification, localization, and detection, are key components of computer vision.
Recent developments in neural networks and deep learning approaches have greatly enhanced the performance of these state-of-the-art visual recognition systems.
 
Image segmentation splits entire images into pixel groupings that can be converted into tactile signals.
 
Image Classification
Sort images into distinct categories
To render an image as haptic information on clothing or gloves, you create a scanning input window of, say, 10 × 10 pixels, which feeds the first 10 × 10 pixels of the image into the classifier. After passing this input, you feed in the next 10 × 10 patch by moving the scanner one pixel to the right – a sliding-window technique (see the sketch below).
Most image classification techniques are currently trained on ImageNet, a data set of approximately 1.2 million high-resolution training images.
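A minimal sketch of the sliding-window pass described above, assuming a 10 × 10 window and a placeholder classify() function standing in for a trained classifier.

# Sketch: slide a 10x10 window across a grayscale image one pixel at a time
# and classify each patch. classify() is a placeholder for a trained model.
import numpy as np

def classify(patch: np.ndarray) -> int:
    # Placeholder: call a bright patch "1" and a dark patch "0".
    return int(patch.mean() > 127)

def sliding_window_labels(image: np.ndarray, win: int = 10) -> np.ndarray:
    h, w = image.shape
    labels = np.zeros((h - win + 1, w - win + 1), dtype=int)
    for y in range(h - win + 1):
        for x in range(w - win + 1):
            labels[y, x] = classify(image[y:y + win, x:x + win])
    return labels

image = np.random.randint(0, 256, size=(40, 40), dtype=np.uint8)
print(sliding_window_labels(image).shape)  # (31, 31) label map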
Object Tracking
Object tracking refers to the process of following an object of specific interest, or multiple objects, in a particular scene. Traditionally, it is applied to video and real-world interactions where observations are made after the initial detection of an object.
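A minimal sketch of tracking by nearest-centroid association across frames. Detections are represented only as bounding-box centers; a real tracker would also handle appearance, occlusion, and re-identification, so this is purely illustrative.

# Sketch: associate detections across frames by nearest centroid.
# Detections are (x, y) box centers; this ignores occlusion and appearance.
import math
from itertools import count

_next_id = count()
tracks = {}  # track_id -> last known (x, y)

def update_tracks(detections, max_distance=50.0):
    global tracks
    new_tracks = {}
    for (x, y) in detections:
        best_id, best_dist = None, max_distance
        for track_id, (tx, ty) in tracks.items():
            dist = math.hypot(x - tx, y - ty)
            if dist < best_dist and track_id not in new_tracks:
                best_id, best_dist = track_id, dist
        if best_id is None:
            best_id = next(_next_id)  # start a new track
        new_tracks[best_id] = (x, y)
    tracks = new_tracks
    return tracks

print(update_tracks([(10, 10), (100, 50)]))   # frame 1: two new tracks
print(update_tracks([(14, 12), (104, 55)]))   # frame 2: the same ids follow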
Semantic Segmentation
Segmentation is a central process in computer vision.
In particular, semantic segmentation attempts to understand the role of each pixel in the image (for example, is it a car, a motorcycle, or some other class?). In a street scene, for instance, in addition to recognizing the person, the road, the cars, the trees, and so on, we also need to outline the boundaries of each object.
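A minimal sketch using a pretrained DeepLabV3 model from torchvision to label every pixel with a class; the model choice and preprocessing are illustrative, not the project's pipeline, and the input image path is hypothetical.

# Sketch: per-pixel class labels from a pretrained DeepLabV3 model.
# Model choice and preprocessing are assumptions for illustration.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor, normalize
from PIL import Image

model = torchvision.models.segmentation.deeplabv3_resnet50(weights="DEFAULT")
model.eval()

def segment(image: Image.Image) -> torch.Tensor:
    x = normalize(to_tensor(image),
                  mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
    with torch.no_grad():
        out = model(x.unsqueeze(0))["out"][0]   # (classes, H, W) scores
    return out.argmax(0)                        # (H, W) class index per pixel

# mask = segment(Image.open("sidewalk.jpg").convert("RGB"))  # hypothetical frame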
Instance Segmentation
Beyond semantic segmentation, instance segmentation distinguishes different instances of a class, such as labeling five cars with five different colors. In classification, there is usually an image with a single object as the focus, and the task is to say what that image is; to segment instances, we must perform much more complex tasks. We see complicated views with various overlapping objects and different backgrounds, and we not only classify these different objects but also identify their boundaries, differences, and relationships to each other.
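A minimal sketch with a pretrained Mask R-CNN from torchvision, which returns a separate mask per detected object instance; the model choice and score threshold are illustrative assumptions.

# Sketch: one mask per detected object instance with a pretrained Mask R-CNN.
# Model choice and score threshold are assumptions for illustration.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def instance_masks(image: Image.Image, score_threshold: float = 0.7):
    with torch.no_grad():
        out = model([to_tensor(image)])[0]
    keep = out["scores"] > score_threshold
    # One (H, W) binary mask and one class label per kept instance.
    return (out["masks"][keep, 0] > 0.5), out["labels"][keep]

# masks, labels = instance_masks(Image.open("street.jpg").convert("RGB"))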
Computer vision techniques can help a computer extract, analyze and understand useful information from a single or a sequence of images. There are many other advanced techniques I haven’t touched yet, including style transfer, colorization, action recognition, 3D objects, human pose estimation, and more.
The term Haptik was coined by the German psychologist Max Dessoir in 1892, when he suggested a name for academic research on the sense of touch analogous to “acoustics” and “optics.”
Gibson (1966) defined the haptic system as the “sensitivity of the individual to the world adjacent to his body through the use of his body.” Gibson and others further emphasized what Weber had established in 1851: the close link between tactile perception and body movement, and the fact that tactile perception is a form of active exploration.

The concept of haptic perception is related to the concept of extended physiological proprioception, whereby when a tool such as a stick is used, the perceptual experience is transferred transparently to the end of the tool.
Tactile perception depends on the forces experienced during touch. This research allows the creation of illusory “virtual” haptic forms with different perceived qualities that have clear application in haptic technology.
With the ambition of becoming a full-body haptic platform, touch actuators can provide realistic touch on the hands and fingertips; combined with a headset, they allow users to feel the surrounding environment.
A haptic wearable is a device that provides tactile feedback to the body.
Tactile information is decoded by specialized mechanoreceptors at the endings of sensory nerves. Histological and physiological studies have identified four types of mechanoreceptor in the skin:
• Meissner corpuscles: fast-adapting receptors found at the margins of the papillary grooves; thin, fluid-filled, globular structures of epithelial cells surrounding the nerve terminal, responsible for fine mechanical sensitivity;
• Merkel discs: slow-adapting receptors found at the center of the papillary sulci, with a semi-rigid structure that transmits skin pressure to the nerve ending;
• Pacinian corpuscles: physiologically similar to, but less numerous than, Meissner corpuscles; they respond to rapid skin deformation but not to sustained pressure, are located deep in the subcutaneous tissue, and have a flexible capsule sensitive to vibratory stimulation of the skin (200–300 Hz);
• Ruffini endings: slowly adapting receptors concentrated in the subcutaneous tissue of the skin grooves, joints, palms, and nails; they capture skin stretching or the arching of nails and transmit it to nerve endings, and the decoded information contributes to the perception of the shape of objects.
Tactile stimulation is converted into electrical impulses by the different morphological types of terminal receptors described above, and these signals ascend via axons of peripheral dorsal root ganglion neurons through nerve fibers of various diameters.
This platform could be extended to other sensory processing aids for an even wider group of people.
The goal is to reach a level at which the system can “see” the differences between a face expressing happiness, sadness, or anger, and provide an interface that translates these emotions in real time using augmented reality, helping people on the autism spectrum learn to read them. It could also help autistic people (and others facing sensory challenges such as generalized anxiety, ADHD, and schizophrenia) cope with sensory processing through simultaneous translation of facial expressions and nonverbal body language.
Sensory Processing Disorder is not listed in the DSM-5 psychiatric manual as an independent neurological disorder; however, it is often identified in people with a diagnosis within the autism spectrum.
Environmental stimuli are quickly and intensely captured by the brain of a person sensitive to this sensory invasion. In this process, many sensations are perceived at once and in (minute) detail, causing hypersensitivity of the senses. Stress is an aggravating factor: the more stressed the person is, the less able they are to tolerate many stimuli at the same time.
Augmented Reality (AR) enables an interactive real-world experience in which objects residing in the real world are “accentuated” by computer-generated perceptual information, including auditory, haptic, and somatosensory information. It can be constructive (adding to the natural environment) or destructive (masking the natural environment). Augmented reality is related to two other widely used terms, mixed reality and computer-mediated reality.
 
 
The main value of using augmented reality as a tool is that it brings components of the digital world into a person’s perception of the real world – not merely by displaying the information, but by integrating immersive sensations that are perceived as natural parts of an environment.

The system will be able to identify signs of stress in users, such as tachycardia, tachypnea, repetitive movement, and sweating, to help people with sensory processing disorders cope.
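A minimal sketch of the stress-flagging idea, assuming the wearable exposes heart-rate and respiration-rate readings; the thresholds (100 beats and 20 breaths per minute) are illustrative resting-adult values, not clinical guidance or the project's actual rules.

# Sketch: flag possible stress from heart rate and respiration rate.
# Sensor access and the thresholds (100 bpm, 20 breaths/min) are
# illustrative assumptions, not clinical guidance.
from dataclasses import dataclass

@dataclass
class Vitals:
    heart_rate_bpm: float
    respiration_rate_bpm: float

def stress_flags(vitals: Vitals) -> list:
    flags = []
    if vitals.heart_rate_bpm > 100:
        flags.append("tachycardia")
    if vitals.respiration_rate_bpm > 20:
        flags.append("tachypnea")
    return flags

print(stress_flags(Vitals(heart_rate_bpm=112, respiration_rate_bpm=24)))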
• The five popularly known senses – sight, hearing, touch, smell, and taste;
• The vestibular sense – balance (whose organ lies in the inner ear, alongside hearing);
• Proprioception (posture; muscle contraction; doing activities without looking at what you do; feeling the weight of objects; feeling “in control” of your own body);
• Interoception (inner sensations of hunger, thirst, sleep, a full bladder, heartbeat, and tiredness);
• Nociception (the sensation of pain);
• Thermoception (the sensing of temperature – cold, hot, etc.).
The last five senses are as present in humans as they are unfamiliar, because almost everything we do feels natural to those who do not have a sensory processing disorder.

Mono Operation
Mono functioning in autism is the experience of one sense at a time. Most people with neurotypical development are able to walk without having to look at their feet; they know they are walking (the proprioceptive sense). At the same time they may be talking on a cell phone and waving to a friend who has just arrived. These are automatic actions that do not disturb their functioning.
A large group of autistic people, however, may be disturbed by the stimuli that reach them through the senses.
The system can automatically, depending on the context, filter out external stimuli – for example sound, through noise-canceling headphones or headsets, or visual input, through augmented reality glasses – without putting the wearer at risk.
Sensory hypersensitivity: Fight, Flight or Freeze
It also helps, in a playful way and with contextual information, to deal with sensory hyposensitivity – the constant search for stimuli.
Autistic sensory hyposensitivity, as opposed to hypersensitivity, is observed when the child seems to seek stimuli by jumping, staring straight into the light, or spinning rotating objects incessantly. These are the children who cannot “sit upright” in a chair and instead sit half lying, half sitting (hyposensitivity of the proprioceptive sense). The child does not run away from sensation; the child pursues it incessantly. In the case of hyposensitivity, children may exhibit inconvenient or even dangerous behavior.
Haptic stimuli delivered through clothing or gloves support sensory modulation by:
• Decreasing the amount of stimuli;
• Decreasing user stress;
• Anticipating stimulus and/or stress situations;
• Controlling situations where sensory processing may be affected.
Example: a young autistic person with visual hypersensitivity can be offered augmented reality glasses to filter out stimuli; a student with hearing sensitivity can be offered headphones.
With this Sensory Integration Interface, the hyper- and hyposensitivity of people with sensory issues, as is often the case in autism, can be greatly mitigated.
 
The forum also seeks to be a space for work on exoskeleton designs and wearable soft robots that provide external mechanical forces driven by voluntary muscle signals, aiding the joint movement the patient intends – for example, avoiding objects, or assisting with better movement composition and modulation, such as haptic adjustments and guidance for reaching a cup or performing a movement in a sporting activity.
 
Combining the advantages of different structural designs to coordinate more precise movements, the device also integrates external mechanical force with neuromuscular electrical stimulation (NMES) technology. By detecting electromyographic signals in the user’s muscles, the device responds by applying NMES to contract the muscles and by exerting external mechanical forces to assist the desired voluntary movement of the joint. The device will also connect to a mobile app designed to record training data in real time, so users can track their training and their progress in device control; the app will also serve as a social networking platform where patients with visual and sensory disabilities can communicate online for mutual support and connect with volunteers who describe landscapes and assist with everyday situations such as cooking.
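A minimal sketch of the detect-then-assist loop, assuming an electromyography (EMG) envelope sampled from the target muscle and a send_nmes_pulse() stub standing in for the stimulator driver; the activation threshold and pulse scaling are placeholders, not the device's actual parameters.

# Sketch: trigger NMES assistance when the EMG envelope shows voluntary
# muscle activity. send_nmes_pulse() is a stub for the stimulator driver;
# the activation threshold and pulse scaling are illustrative placeholders.
import numpy as np

ACTIVATION_THRESHOLD = 0.3  # normalized EMG envelope level (assumed)

def send_nmes_pulse(intensity: float) -> None:
    print(f"NMES pulse at intensity {intensity:.2f}")  # stand-in for a hardware call

def assist_if_active(emg_window: np.ndarray) -> bool:
    envelope = np.abs(emg_window).mean()   # crude envelope: mean rectified EMG
    if envelope > ACTIVATION_THRESHOLD:
        # Scale assistance with detected effort, capped at full intensity.
        send_nmes_pulse(min(1.0, envelope / (2 * ACTIVATION_THRESHOLD)))
        return True
    return False

# Simulated 100-sample EMG window showing voluntary activity.
assist_if_active(np.random.uniform(-0.8, 0.8, size=100))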
The aim is also to enable the development of low-input motion capture equipment by synchronizing basic inputs such as computer vision, head movement, and hand movement with models that could enable a cross-platform hybrid solution (see the sketch below).
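A minimal sketch of fusing such low-rate input streams by timestamp, assuming each sensor (camera, head tracker, hand tracker) delivers timestamped samples; the stream names, rates, and payloads are illustrative assumptions.

# Sketch: align timestamped samples from several low-cost input streams
# (e.g. vision, head pose, hand pose) to a common clock by nearest timestamp.
# Stream names, rates, and payloads here are illustrative assumptions.
from bisect import bisect_left

def nearest_sample(stream, t):
    """stream: list of (timestamp, value) sorted by timestamp."""
    times = [ts for ts, _ in stream]
    i = bisect_left(times, t)
    candidates = [j for j in (i - 1, i) if 0 <= j < len(stream)]
    return min((stream[j] for j in candidates), key=lambda s: abs(s[0] - t))[1]

head = [(0.00, "head@0ms"), (0.10, "head@100ms"), (0.20, "head@200ms")]
hand = [(0.03, "hand@30ms"), (0.13, "hand@130ms"), (0.23, "hand@230ms")]

# Fuse both streams onto the camera's frame times.
for frame_time in (0.05, 0.15):
    print(frame_time, nearest_sample(head, frame_time), nearest_sample(hand, frame_time))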
