Architecture
The manycore real-time OS is based on the following assumptions about manycore processors:
- 1. Cache coherency is not always available
- 2. Memory sharing is possible, but inherently expensive
- 3. Each core can communicate with the others via some form of inter-core transport
From the software system point of view, we have assumed the following requirements:
- 1. The application should not need to be aware of how many cores there are, nor of which core it runs on.
- 2. Applications are mostly soft real-time with some hard real-time portions, and mostly require fixed-priority scheduling so that existing applications can be reused
From these hardware and software requirements, and with knowledge of real-time OS implementation architectures, including those for multi-core, we naturally arrived at the following basic properties of the manycore real-time OS architecture:
- 1. A small micro-kernel resides on each core, all together forming a distributed kernel system
- 2. The micro-kernel features a message manager, a core-local scheduler with basic task management, a core-local interrupt manager, and a core-local memory manager
- 3. The kernel APIs are core-transparent and can be used across all cores, presenting an SMP view to the application and providing portability and performance scalability as the number of cores changes
- 4. All other OS services are implemented as server threads distributed over multiple cores. Server (OS) API libraries are used that wrap the message passing between the client and the server (see the sketch after this list).
- 5. Servers are multiplexed according to locality and also to distribute the service workload.
- 6. Threads and servers are named and managed by a Name Service. The Name Server is used by clients to discover services, and also to map requests to the appropriate server instance in the case of a multiplexed server
- 7. A thread scheduler, which is a server thread, schedules threads over all the cores within a scheduling cluster, as each kernel only schedules threads within its core
- 8. Clustering, or core-grouping, can be used to partition resources and server instances as the number of cores becomes large. Also, when there are separate processor clusters with separate cluster-local shared memory, they are managed as separate scheduling clusters.
- 9. Fast, non-interrupt-driven inter-core communication primitives are available for use by higher-priority hard real-time threads, or by sets of threads that communicate with high frequency
- 10. Interrupts can be supported on any core. Since the number of device driver threads is limited relative to that of application threads, most cores will only serve inter-core communication interrupts.
- 11. A thread-core affinity feature is available for threads such as physical device driver threads, along with the interrupt handlers coupled with those driver threads, to minimize unnecessary inter-core communication.
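To make the message-passing model concrete, here is a minimal sketch of how a server (OS) API library could wrap a request to a named server so that the application sees an ordinary, core-transparent function call. All identifiers here (ns_lookup, msg_send, msg_recv, fs_open, the message layout) are illustrative assumptions, not the actual kernel API.

```c
/* Hypothetical client-side wrapper for a file-system "open" service.
 * The caller never needs to know which core the server runs on. */
#include <stdint.h>
#include <string.h>

typedef struct {
    uint32_t dest;          /* destination thread ID (core-transparent) */
    uint32_t op;            /* requested operation code                 */
    uint8_t  payload[56];   /* small, fixed-size message body           */
} msg_t;

/* Assumed kernel primitives: core-transparent messaging and name lookup. */
extern int ns_lookup(const char *service_name, uint32_t *thread_id);
extern int msg_send(const msg_t *request);
extern int msg_recv(uint32_t from, msg_t *reply);

#define OP_FS_OPEN 1

/* Server API library function: wraps the request/reply message exchange. */
int fs_open(const char *path)
{
    uint32_t server;
    msg_t req, rep;

    /* Discover the (possibly multiplexed) file-system server instance;
     * the name service may map the request to the instance closest to
     * this core. */
    if (ns_lookup("fs", &server) != 0)
        return -1;

    memset(&req, 0, sizeof req);
    req.dest = server;
    req.op   = OP_FS_OPEN;
    strncpy((char *)req.payload, path, sizeof req.payload - 1);

    /* The kernel's message manager routes this over the inter-core
     * transport if the server happens to run on another core. */
    if (msg_send(&req) != 0)
        return -1;
    if (msg_recv(server, &rep) != 0)
        return -1;

    return (int)rep.payload[0];   /* file descriptor or error code */
}
```

Because the name lookup and the message routing are hidden inside the library, the same application code runs unchanged whether the server is local or on a remote core.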
Semi-Priority Scheduling
One of our goals is to achieve real-time scheduling while attaining high throughput. These two are competing factors: to attain higher throughput, it is necessary to perform load balancing, which introduces jitter, making it difficult to estimate exactly how much time an operation takes and thus affecting the real-time capability.
On the other hand, prohibiting load balancing to assure the real-time capability leads to lower average throughput due to inefficient use of the processors in terms of non-real-time performance.
To cope with this issue, we developed a novel scheduling policy called semi-priority-based scheduling. The key idea is to make use of the large number of cores to assure real-time scheduling of higher-priority threads, while other, lower-priority threads are load-balanced based on the amount of work they perform, in addition to their priorities. An overview of semi-priority scheduling is depicted in Figure 2.
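As a rough illustration of that idea, the sketch below keeps threads at or above an assumed real-time priority threshold on their fixed core, scheduled strictly by priority, while lower-priority threads are placed on the least-loaded core according to the amount of work they have performed. The threshold, core count, and all names are assumptions for illustration, not the actual scheduler implementation.

```c
/* Minimal sketch of the semi-priority placement decision. */
#include <stdint.h>

#define NUM_CORES     16
#define RT_PRIO_LEVEL 8      /* priorities 0..7 treated as hard real-time */

typedef struct {
    uint32_t prio;           /* 0 = highest priority                      */
    uint32_t assigned_core;  /* fixed core for real-time threads          */
    uint64_t work_done;      /* measured execution time (work estimate)   */
} thread_t;

static uint64_t core_load[NUM_CORES];   /* accumulated work per core */

/* Decide which core a ready thread should run on. */
unsigned place_thread(const thread_t *t)
{
    if (t->prio < RT_PRIO_LEVEL) {
        /* Higher-priority (hard real-time) thread: never migrated, so its
         * timing is not disturbed by load-balancing jitter. */
        return t->assigned_core;
    }

    /* Lower-priority thread: pick the least-loaded core, balancing by the
     * amount of work performed in addition to priority. */
    unsigned best = 0;
    for (unsigned c = 1; c < NUM_CORES; c++) {
        if (core_load[c] < core_load[best])
            best = c;
    }
    core_load[best] += t->work_done;
    return best;
}
```

The point of the split is that the hard real-time subset keeps deterministic, fixed-priority behavior on known cores, while the remaining cores absorb the throughput-oriented work.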
* Masaki Gondo is the Software CTO at eSOL, a company that provides RTOS products and tools as well as various engineering services. He has 20 years of experience in the field of OS architecture and related technologies for use in a wide range of embedded systems. He also acts as chair of the Multicore Association SHIM Working Group, vice-chair of the Embedded Multicore Consortium, visiting research fellow at the Advanced Multicore Processor Research Institute at Waseda University, and a member of the steering committee of the T-Engine Forum, among other roles.