🎥 3 conferences
🎤 3 talks
📅 Years active: 2017 to 2020
Dr Benedict R. Gaster is a Senior Lecturer at the University of the West of England, where he is the director of the Bristol Computing Lab, within which he also leads the Physical Computing group. His research focuses on the design of embedded platforms for IoT. He is the co-founder of Bristol LoRaWAN, a low-power wide-area network for the city of Bristol, and the technical lead for a city-wide project on community pollution monitoring, having developed UWE Sense, a hardware platform for low-cost sensing, which launches in December 2017. Along with his PhD students, and in collaboration with UWE's music tech department, he is developing a new audio platform based on ARM micro-controllers, using the Rust programming language to build faster and more secure sound!
Previously, Benedict worked at Qualcomm and AMD, where he was a co-designer of the programming language OpenCL and the lead developer of AMD's OpenCL compiler. He has a PhD in computer science for his work on type systems for extensible records and variants. He has published extensively and given numerous presentations, including one last year at FOSDEM on LoRaWAN Bristol.
3 known conferences
This talk will introduce the Muses project, which applies programming language theory and practice, physical computing, networking, and musical theory to the design and implementation of Digital Musical Instruments. Rust is a key ingredient in the Muses project, providing a robust and performant foundation for cross-platform, desktop, and embedded system development.
The talk will give a brief introduction to the Muses project as a whole and then focus on the use of Rust in developing a selection of very different components in the system, and on its benefits for these wildly varying use cases.
Demos of the Digital Musical Instruments with Rust at their heart will be shown throughout the talk.
Controller and gesture interaction with audio and/or visual media is today ubiquitous, requiring the development of intuitive software solutions for interaction design. Designing and building these interfaces often requires extensive domain expertise in audio and visual media creation, e.g. that of the musician, but additionally in engineering and software development. In this talk we focus on custom controller-based interactive systems for sound and musical performance, emphasizing an intuitive and simple design process that is accessible to artists.
A large part of the software developed for these systems is low-level systems code, where direct access to hardware and predictable performance are hard requirements. Historically, these systems have been written in C/C++, and in the case of embedded systems C is still the language of choice. With the emergence of the systems programming language Rust, an alternative for developing these systems is now with us, with its support for high-level features such as traits, type inference, pattern matching, and, of course, its affine type system for pointers.
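To make the last point concrete, here is a minimal sketch (not code from the Muses project) of what "affine" means for pointers in Rust: a value has exactly one owner, a move invalidates the source, and borrows give temporary, compiler-checked access.

```rust
// A minimal sketch of Rust's affine treatment of ownership.
fn main() {
    let buffer = vec![0i16; 4]; // e.g. a small audio buffer
    let alias = buffer;         // ownership moves; `buffer` is no longer usable
    // println!("{:?}", buffer); // compile error: value moved out of `buffer`

    let total: i16 = alias.iter().sum();
    assert_eq!(total, 0);

    // A borrow grants temporary access without transferring ownership.
    let view = &alias;
    assert_eq!(view.len(), 4);
    // `alias` is still the owner here and is freed exactly once at scope end.
}
```

Because each allocation has a single owner, the compiler can insert deallocation statically, which is exactly the property that matters on an embedded target with no garbage collector.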
This talk will introduce the Muses project, which applies programming language theory and practice, physical computing, networking, and musical theory to the design and implementation of Digital Musical Instruments. Rust is a key ingredient in the Muses project, providing a robust and performant foundation for cross-platform, desktop, and embedded system development.
A high-level overview of the schedule is:
The demonstration will include the following physical components:
Custom interface
Raspberry Pi for sound
The framework also includes an approach to automatically generating interfaces from a DSL for SVG interfaces, written largely in Haskell but with a tessellation pipeline written in Rust. However, while this will be mentioned in passing, it is not the intention of this talk to cover this aspect of the system in detail. (For more information, see the provided link for the project website and the associated papers, also linked from the site.)
Knowledge of programming will be expected, and prior use of C/C++, Rust, or another systems programming language would be useful.
Audio topics will be introduced throughout the talk, and it is not expected that audience members have a musical background.
Dr Benedict R. Gaster is an Associate Professor at the University of the West of England, where he is the director of the Computer Science Research Centre, within which he also leads the Physical Computing group. His research focuses on the design of embedded platforms for musical expression and, more generally, the IoT. He is the co-founder of Bristol LoRaWAN, a low-power wide-area network for the city of Bristol, and the technical lead for a city-wide project on community pollution monitoring, having developed UWE Sense, a hardware platform for low-cost sensing. Along with his PhD students, and in collaboration with UWE's music tech department, he is developing a new audio platform based on ARM micro-controllers, using the Rust programming language to build faster and more robust sound!
Previously, Benedict worked at Qualcomm and AMD, where he was a co-designer of the programming language OpenCL and the lead developer of AMD's OpenCL compiler. He has a PhD in computer science for his work on type systems for extensible records and variants. He has published extensively and given numerous presentations, including ones at FOSDEM on Rust and LoRaWAN.
Below are some examples of recent talks:
Rustyarm is a project in the Physical Computing group at the University of the West of England looking at the application of Rust on embedded micro-controllers. UWE Sense is a new hardware and software platform for IoT, built with ARM micro-controllers, Bluetooth LE, and LoRaWAN, which runs a software stack written completely in Rust. While UWE Sense is a close-to-the-metal implementation, UWE Audio, a new hardware platform for studying high-performance audio using ARM micro-controllers, uses Rust to implement a monadic reactive graph, supporting both an offline compiler and an embedded DSL. UWE Audio uses safe Rust, for example describing domain clocks as generic associated types, providing both compile-time guarantees that multiple streams will not be incorrectly sequenced at different sample rates, and the ability to dynamically compile for different parts of the system.
In this talk I will provide a high-level overview of the Rustyarm project, including how using Rust has made this project interesting and has enabled guarantees with respect to, for example, the audio scheduler. However, Rust has some shortcomings in the embedded domain, and I will detail some of these and what we and the wider community are doing to address them. As an example of Rust's application in the embedded domain, I will present early work on UWE Audio, a hardware and software platform for building digital musical instruments, which, as already noted, is programmed solely in Rust.
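The flavour of the sample-rate guarantee can be illustrated with a simplified sketch: here ordinary type parameters with zero-sized marker types stand in for the project's generic-associated-type encoding of domain clocks, but the compile-time effect is the same, namely that streams in different rate domains cannot be mixed. The names (`Rate44100`, `Stream`, `mix`) are illustrative, not from the UWE Audio codebase.

```rust
use std::marker::PhantomData;

// Sample-rate domains as zero-sized marker types.
struct Rate44100;
#[allow(dead_code)]
struct Rate48000;

// A stream is tagged with its rate domain; the tag costs nothing at runtime.
struct Stream<R> {
    samples: Vec<f32>,
    _rate: PhantomData<R>,
}

impl<R> Stream<R> {
    fn new(samples: Vec<f32>) -> Self {
        Stream { samples, _rate: PhantomData }
    }

    // Mixing is only defined for two streams in the SAME rate domain `R`.
    fn mix(&self, other: &Stream<R>) -> Stream<R> {
        let samples = self
            .samples
            .iter()
            .zip(&other.samples)
            .map(|(a, b)| a + b)
            .collect();
        Stream::new(samples)
    }
}

fn main() {
    let a: Stream<Rate44100> = Stream::new(vec![0.1, 0.2]);
    let b: Stream<Rate44100> = Stream::new(vec![0.3, 0.4]);
    let mixed = a.mix(&b); // compiles: same domain

    // let c: Stream<Rate48000> = Stream::new(vec![0.5]);
    // a.mix(&c); // compile error: Rate44100 and Rate48000 do not unify

    assert!((mixed.samples[0] - 0.4).abs() < 1e-6);
}
```

The point of the encoding is that a sequencing mistake across sample rates becomes a type error at compile time rather than an audible glitch at run time.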
During the talk I will give a demonstration of UWE Audio and our embedded audio DSL, written in Rust. I also plan to have a number of the UWE Sense modules for people to look at; there is an app that they can download, which talks to the sensors and logs data to an open cloud infrastructure. The app is not developed in Rust (NativeScript is used), but the software for the sensors is. I don't plan to talk in detail about this part of our work, but I can provide links to our website and our partners, which will be launched in December 2017, and links to the software repos.
Full disclosure: UWE Audio is a reasonably new project, and while we have a working system, it would be misleading to say it is complete. For example, as our hardware platform has two ARM micro-processors, one for the control domain and one for the audio/CV domain, our current compiler produces two Rust programs that are compiled separately and flashed to the devices. Our long-term goal is to have the controller deploy DSP graphs to the audio processor dynamically via a Rust-based API, similar in concept to OpenCL, but we are still quite a long way from reaching that final goal. That being said, the project has been driven from the start by the goal of investigating Rust as an alternative to C for embedded programming, and its particular application in the audio domain, and for this I believe it would be an interesting talk at FOSDEM.
Everyone is excited about the Internet of Things (IoT) and the possibility of really seeing the democratization of the internet: devices for everyone's needs, not just a few! If we are to achieve this, then these devices must be designed and built by everyone; we must create a zine-like industry, beyond the current makers of today, to enable people of all ages, genders (including non-binary), and races to build devices suited to their own needs. LoRaWAN is a Low Power Wide Area Network (LPWAN) specification intended for wireless, battery-operated Things in regional, national, or global networks. LoRaWAN targets key requirements of the Internet of Things such as secure bi-directional communication, mobility, and localization services.
In this talk, I will introduce LoRaWAN as a key radio technology for IoT and walk through why it is a technologically important development, showing how to build LoRaWAN node applications to explore the possibilities of IoT. There are a number of LoRaWAN networks emerging across Europe, and I will highlight the political importance of keeping these networks open, supporting Open Data and Open Science and empowering the development of a new set of application domains.
The lecture will involve a demonstration of a LoRaWAN application that will showcase both the node and gateway aspects of a deployed network.
Soon, everything on Earth will be connected via peer-to-peer networking and/or the public Internet, with a multitude of sensor-driven devices dramatically changing our lives and our environment. These will be based on a wide variety of hardware, ranging from tiny (e.g., microcontrollers) to huge (e.g., cloud servers), with one thing in common: they will require a radio connection, in some form or another, to a gateway that is connected to the internet. While Bluetooth LE or even Wi-Fi might be used around the "smart" house or within a limited range, the battery limitations of the latter and the range limitations of the former mean it is unlikely that these standards will form the backbone of an IoT network.
For IoT there are a number of competing radio standards, e.g. LoRa and SigFox; both are long-range and make it possible to build very low-power nodes, with a potential battery life of two or more years, while providing a long range, often in excess of 10 km. A key feature of these standards is that they are low-bandwidth: each message is often limited to 100 or so bytes, and for SigFox even less. Each standard has its drawbacks, but SigFox requires more expensive chipsets on the gateway side, which is not the case for LoRa, and as such we have seen the development of community, crowd-sourced LoRa networks based on LoRaWAN. One example of this is The Things Network, whose community has been developing gateways, nodes, cloud backends, and software to run on these, all open source, supporting open data and open science. In Bristol we are deploying a LoRaWAN network built on existing wireless infrastructure locations, provided by Bristol Wireless.
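That byte budget shapes how node software is written: payloads are hand-packed byte layouts rather than verbose formats such as JSON. The following is an illustrative sketch (the field layout is my own, not a LoRaWAN-mandated format) of packing a temperature and humidity reading into three bytes.

```rust
// Illustrative sketch: packing a sensor reading into a compact
// LoRaWAN-style payload. Temperature is carried as tenths of a degree
// in a signed big-endian 16-bit field, humidity as a whole percent.
fn encode(temp_c: f32, humidity_pct: u8) -> [u8; 3] {
    let t = (temp_c * 10.0) as i16; // 21.5 °C -> 215
    let tb = t.to_be_bytes();
    [tb[0], tb[1], humidity_pct]
}

fn decode(payload: [u8; 3]) -> (f32, u8) {
    let t = i16::from_be_bytes([payload[0], payload[1]]);
    (t as f32 / 10.0, payload[2])
}

fn main() {
    let payload = encode(21.5, 40);
    assert_eq!(payload.len(), 3); // well under typical LoRaWAN payload limits
    let (t, h) = decode(payload);
    assert_eq!(t, 21.5);
    assert_eq!(h, 40);
}
```

Three bytes per reading leaves ample headroom even under the strictest regional duty-cycle and payload-size limits, which is why this style of packing is the norm on LPWAN links.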
In this lecture I will introduce LoRaWAN from a technical perspective, providing examples, and also look at IoT and LoRaWAN networks from a political perspective.
I will bring a portable LoRaWAN network that we have developed at the University of the West of England, mostly for testing radio capabilities but also allowing us to demo LoRaWAN and IoT on the move, along with some example nodes, some of which will be used by the audience. This will demonstrate the use of The Things Network's backend infrastructure to provide an internet backend, bringing the internet part of the Internet of Things. We will provide demos that the audience will be able to interact with during the session, with the results visible live during the talk.