2023 HDR Curtin round - Edge computing efficient AI
Applications open: 8/07/2022
Applications close: 18/08/2022
About this scholarship
This research aims to develop deep learning architectures and algorithms for attention-based local processing. The attention mechanism identifies small regions of interest worth transmitting for deeper analysis, in a manner analogous to human eye gaze, which is guided by a combination of local-scale feature recognition and high-level task-based guidance. Thus, most processing is local to the edge device. We can compare human attention with computer vision transformer-based attention models to implement more ‘human-like’ models (based on learning human attention behaviours), for three reasons. Firstly, human attention is more robust against adversarial attacks (modifications to an image or scene to confuse object detection), being based on both top-down intention direction from a task-based central model and vision-based bottom-up attention. Secondly, these models will be more amenable to accuracy-retaining model simplification, leading to much quicker computation, and will allow the same edge-device code to recognise different properties in the same images on the fly, depending on the current task as modified by central guidance. Thirdly, results will be more transparent to humans, as we can report “the model saw this and decided that”, and so on. The analogy is that ‘visual cortex’ local processing runs on the edge device, while ‘frontal cortex’ central guidance runs on a central device connected to the edge device by a high-speed 5G connection.
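As an illustrative sketch only (not part of the project's codebase), the idea of task-guided attention selecting regions worth transmitting can be expressed as scoring local patch features against a central task query and keeping only the top-scoring patches; all names and shapes below are hypothetical.

```python
import numpy as np

def select_regions(patch_features, task_query, top_k=4):
    """Score image patches against a task-driven query ('frontal cortex'
    guidance) and keep only the top_k patches ('visual cortex' output)
    worth transmitting from the edge device.

    patch_features: (n_patches, d) array of local patch embeddings
    task_query:     (d,) query vector encoding the current central task
    """
    d = patch_features.shape[1]
    # Scaled dot-product attention scores, as used in vision transformers
    scores = patch_features @ task_query / np.sqrt(d)
    # Softmax (numerically stabilised) gives per-patch attention weights
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # Transmit only the most salient regions; the rest stay on-device
    keep = np.argsort(weights)[::-1][:top_k]
    return keep, weights

rng = np.random.default_rng(0)
feats = rng.normal(size=(16, 8))   # 16 hypothetical image patches
query = rng.normal(size=8)         # hypothetical task embedding
keep, w = select_regions(feats, query, top_k=4)
```

Changing `task_query` changes which patches are selected from the same image, which is the sense in which identical edge-device code can recognise different properties under different central guidance.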
This research aims to apply these methods to measures of engagement and concentration while advice is being received, and the methods will ultimately be incorporated into future portable devices, including smartphones. We will use established measures of cognitive effort (e.g. pupillometry, behavioural measures) in combination with engineering/data science (i.e. AI) approaches to examine these effects in students during teaching events, with a view to later extension to adults receiving health advice.
The research will aim to:
1. Develop and test deep learning algorithms for attention-based local processing, including the trade-offs between model size, accuracy, and practical effectiveness.
2. Investigate the properties of the algorithms developed, in terms of their resistance to adversarial attacks and their explainability/transparency.
An Internship opportunity may also be available with this project.
Faculty of Science & Engineering
- Science courses
- Engineering courses
- Western Australian School of Mines (WASM)
- Higher Degree by Research
- Australian Citizen
- Australian Permanent Resident
- New Zealand Citizen
- Permanent Humanitarian Visa
- International Student
- Merit Based
The annual scholarship package (stipend and tuition fees) is approx. $60,000 - $70,000 p.a.
Successful HDR applicants will receive a 100% fee offset for up to 4 years. Stipend scholarships, valued at approx. $28,800 p.a. for up to a maximum of 3.5 years, are awarded via a competitive selection process. Applicants will be notified of the scholarship outcome in November 2022.
For detailed information, visit: Research Training Program (RTP) Scholarships | Curtin University, Perth, Australia.
All applicable HDR courses
The successful applicant will be familiar with deep learning approaches, including neural architecture search and computer vision applications.
Previous experience with multi-sensor fusion and eye-gaze or other human sensor data will be beneficial. Good written and spoken English is essential.
If this project excites you, and your research skills and experience are a good fit for this specific project, you should contact the Project Lead (listed below in the enquiries section) via the Expression of Interest (EOI) form.
Eligible to enrol in a Higher Degree by Research Course at Curtin University by March 2023
To enquire about this project opportunity, which includes a scholarship application, contact the Project Lead, Professor Tom Gedeon, via the EOI form above.