
prototyping

PROTOTYPE 1

A first SolidWorks model was developed. It consisted of two arms connected by a cylindrical pin, which allowed 360° rotation on one axis. Two inner arms of different lengths were designed for testing. On the outer arm, where the sensor would be placed, three different position slots were also designed to offer different resting angles for the sensor.

 

This first prototype was 3D printed and tested on two of the wheelchairs we had access to. Testing on the RoboLab wheelchair clearly showed that the longer inner arm was more convenient, and that it should in fact be even longer. It also revealed that fixing the arm onto the wheelchair was more challenging than expected, so this would need to be a focus for the next prototype. Finally, it was observed that the sensor arm would need cable holes for when the sensor was added, another aspect to take into account in the next design.


After numerous co-design sessions with multiple materials, further research, interviews, and discussions, some ideas were developed further and made tangible. Firstly, having chosen the sensing system we wanted to use, we decided that the system would fit the wheelchair best in the form of a "camera arm or tripod". A list of requirements was drawn up, and after researching the options within our scope, we concluded that none of them met the requirements, so the arm would be designed from scratch.

 

Secondly, testing on Andrei’s wheelchair showed that the total length of the arm again had to be increased. Additionally, it was discussed that if the wheelchair were driven on a bumpy road, the sensor arm might not be stable enough, so a very effective way of fixing the arm onto the wheelchair should be looked into. Moreover, the height of the sensor rest should also be increased. A discussion with Bas (the ME student) yielded the following improvements: add a butterfly bolt and screw to adjust and fix the arm in place, possibly add an extra joint to the arm, and add guide lines at the joints to help the user.

TESTING
PROTOTYPE 1

After designing an extendable arm and a working face-detection camera, we meet again with Andrei to test how the arm can be positioned on his current wheelchair and whether the face sensor actually works as expected.


When the arm is attached right behind his head, the camera module only appears to pick up Andrei’s face in specific spots and at specific rotation angles. The camera also easily gets blocked by Andrei’s headrest. In some spots the camera works reliably, but Andrei has to strain himself greatly to keep his head in the same spot long enough for the sensor to activate. These findings highlight that the reliability of this face-detection camera module is limited in practice.


For a final product, a different form of optical sensor could be considered: a module that detects the position of the eyes instead of the entire face. This could make it easier for Andrei to activate, as he showed far greater accuracy with his eyes than with his neck/head movement, and moving just his eyes takes far less effort than moving his entire head.

As for the appearance and experience of the arm, Andrei agrees with the proposal to change the color of the arm from a bright color to black, as this would give it a more modern and “cool” feel. Andrei is also excited about the idea of featuring his art on the arm in the form of swappable magnets, making it even more personalizable. A diagram of the basic function of the arm, as well as instructions on how to move it out of the way safely, can also be added on the side of the arm to clarify things for Andrei’s caretakers.

PROTOTYPE 2

Based on the feedback from the Ability Tech team and the evaluation with Andrei, a second prototype is built and tested.

 

This second prototype consists of three longer arm segments, with butterfly bolts and screws and holes for the cables.
The prototype is tested on the wheelchair in the RoboLab, and it is concluded that the second segment of the arm could be a bit longer, both to make sure Andrei can see the sensor and to increase the mobility of the arm.

concept development

Taking into account all the comments and observations from the evaluation with Andrei, as well as all the research done before, in particular on how to make the product invoke excitement rather than pity, the design of the arm is updated. Collages made to capture this feeling of awe and excitement feature mainly modernistic designs, such as lamps and other decorative pieces with sleek shapes and futuristic, minimalistic components.


When translated into the actual design, the arm now has a sleeker and cleaner look with many rounded elements. The previously relatively primitive hinges are replaced by disk-shaped hinges with a hole through the middle. While still clearly recognizable as hinge points, it is not immediately clear how these hinges actually function on the inside, further enhancing the futuristic feel and evoking amazement and curiosity. The cables connecting the processing unit to the camera, which is now shaped to resemble an eyeball, are packed tightly along the outside of the arm and wrap freely around the hinges at the hinge points.


For the prototype, a solution with a hollow screw was devised. Furthermore, Andrei’s art can now be featured on the inside of the arm segments thanks to the addition of magnetic strips. The final design can be seen in the image below.

Final prototype

The final prototype consists of 3D-printed parts that make up the arm, connected by snap fits to allow rotation. The sides have magnetic tape, on which magnets featuring Andrei’s art can be placed and swapped whenever desired. The outer part of the arm carries an image with folding instructions, so caretakers can move the arm out of the way and place it back when required.


Images of the real prototype on the real wheelchair.

programming

Person sensor

The first approach to recognizing someone looking at the sensor uses a person sensor: a pre-programmed sensor that recognizes faces using a built-in camera module. In the circuit it is connected to a buzzer that goes off when someone is facing the sensor.

Example code comes with the sensor; however, it was adjusted to fit the requirements of this project.

Every 0.2 seconds the camera takes a new picture to check whether someone is looking at the sensor. The project group added to the code that the buzzer only goes off after someone has looked at the sensor for 1 second, i.e. after 5 consecutive positive output signals (5 × 0.2 seconds). This makes sure the buzzer won’t go off when someone glances at the sensor accidentally, for example while moving their head.

When a positive output signal is followed by a negative one (no face detected), the counter resets to 0 and must count up to 5 again before the buzzer is triggered.
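The counting-and-reset logic described above can be sketched as a small Python helper. This is an illustrative sketch, not the project’s actual firmware: the function name and the boolean-readings interface are hypothetical stand-ins for the real sensor-polling code.

```python
REQUIRED_HITS = 5  # 5 readings x 0.2 s sample interval = 1 second of looking


def should_buzz(readings, required_hits=REQUIRED_HITS):
    """Return True once `required_hits` consecutive positive readings occur.

    `readings` is an iterable of booleans (True = face detected in that
    0.2 s sample). Any single negative reading resets the counter, as
    described in the text above.
    """
    consecutive = 0
    for face_detected in readings:
        if face_detected:
            consecutive += 1
            if consecutive >= required_hits:
                return True  # enough uninterrupted looking: trigger buzzer
        else:
            consecutive = 0  # a miss resets the 1-second timer
    return False
```

On the real microcontroller this logic would run inside a loop that samples the sensor every 0.2 seconds and drives the buzzer pin when the function fires.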

This sensor is not as accurate as eye-recognition software, since it recognizes the whole face. In Andrei’s case this means he has to move his head in the direction of the sensor, not merely look at it. This might require more effort than eye-detection software, making it a less suitable option. The sensor was tested with Andrei; the results can be found under ‘Testing prototype 1’.


Eye recognition software

After building the person sensor, which was relatively easy, the project group found some time to also look into eye recognition, which is more difficult but also more accurate.
Using ChatGPT, a way was found to recognize eyes by training on uploaded pictures labelled as positive and negative images. This code is written in Python and runs on the laptop. After that, a camera module was connected to a Raspberry Pi Pico in order to use the code on the MyEyii-arm without the laptop. However, to run Python code on a Raspberry Pi Pico, the code has to be adapted, since the Pico runs MicroPython rather than full Python. The project group had some difficulties with this and decided to focus more on the arm design than on the programming of the eye sensor, since it is very complex.
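The positive/negative training idea mentioned above can be illustrated with a deliberately tiny classifier. This is not the project’s actual ChatGPT-derived code (which is not reproduced in this report); it is a toy nearest-mean classifier over flattened grayscale pixel values, and all function names and data here are hypothetical.

```python
def mean_image(images):
    """Pixel-wise mean of equal-length grayscale pixel lists."""
    n = len(images)
    return [sum(pixels) / n for pixels in zip(*images)]


def looks_like_eye(image, positives, negatives):
    """Classify by nearest mean: compare the squared distance from `image`
    to the average positive ('eye') image and to the average negative
    ('no eye') image, and pick whichever is closer."""
    pos_mean = mean_image(positives)
    neg_mean = mean_image(negatives)
    d_pos = sum((a - b) ** 2 for a, b in zip(image, pos_mean))
    d_neg = sum((a - b) ** 2 for a, b in zip(image, neg_mean))
    return d_pos < d_neg  # True = closer to the positive examples
```

A real pipeline would instead use a trained detector (e.g. a cascade classifier) over camera frames, but the principle of separating labelled positive and negative examples is the same.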
