Osmin scope list
Empty scopes
An empty scope is an address within code memory with no associated registers or instructions. The Osmin source presently has two.
firmament::
Some of Osmin’s code is only executed during startup. The memory this code occupies could, in principle, be reclaimed for other uses at the end of startup. Whether or not reclamation is worthwhile depends on how many pages of code memory (if any) would actually be freed. This number of pages is not known as of December 2023, but is likely to end up being either 0 or 1.
Osmin’s firmament::
scope indicates the lowest code memory address that could conceivably be repurposed after initialization. For more information, see “Order of kernel code memory” in Memory structures.
safe.to.write::
Osmin wipes the code memory at initialization and shutdown, but wiping the kernel isn’t feasible. safe.to.write::
is the lowest code memory address above the kernel that can be erased. Note that the glossary (superuser’s constants initialization) from the assembler is above safe.to.write::
, so the code memory wipe has to be after the second glossary call. For more information, see “Order of kernel code memory” in Memory structures.
In addition to wiping from safe.to.write::
upward, code memory is also wiped from address 0 up to but not including main::
. Note that this overwrites the original CALL
instruction to the glossary, which will not be needed again.
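The two wiped regions described above can be summarized as a predicate over code memory addresses. This sketch is illustrative only; the numeric addresses are hypothetical placeholders, not values from the Osmin source.

```python
# Which code memory addresses the wipe overwrites with NOP, per the two
# regions described above. All numbers here are hypothetical examples.
MAIN = 2              # main:: begins at the second instruction
SAFE_TO_WRITE = 5000  # lowest erasable address above the kernel
CODE_TOP = 8192       # one past the highest code memory address

def is_wiped(addr):
    # Region 1: address 0 up to but not including main::
    # (this destroys the initial CALL to the glossary, no longer needed)
    if 0 <= addr < MAIN:
        return True
    # Region 2: safe.to.write:: upward
    return SAFE_TO_WRITE <= addr < CODE_TOP
```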
Data-only scopes
The following scopes only exist as namespaces for “global” kept registers. They don’t contain any code at all. These scopes are documented in Kept registers.
api:: code:: count:: dict:: ig:: index:: p.addr:: page:: text.seg:: user.pool:: v.addr::
Scopes to initialize the operating system
When the power comes up on a Dauug|36 machine, the order of what happens is:
1. firmware loading | A firmware loader circuit, controlled by NOR flash, loads the firmware.
2. bootloader | A small firmware program loads Osmin from external media.
3. main:: | This merely branches to wipe.at.startup::.
4. wipe.at.startup:: | This calls measure.dims:: and wipes primary storage.
5. measure.dims:: | This determines the sizes of builder-configurable RAMs.
6. boot:: | This initializes Osmin data structures and programs.
7. run.system:: | This is Osmin’s scheduler.
Steps 1 and 2 should be documented elsewhere; this section covers Osmin’s portion, which begins with main::. More detail on these scopes follows.
boot::
Except for a lot of instructions that print trace and status information, scope boot::
doesn’t contain much code. When boot::
is called, primary storage has already been wiped. boot::
calls several setup.*::
scopes to initialize Osmin’s memory.
After memory initialization, Osmin is ready for boot::
to load programs and schedule program instances as the system requires. This is requested via calls to enschedule::
. Then in its final instruction, boot::
passes control to run.system::
.
jinx.measure.dims::
This scope bypasses measure.dims::
and installs hardcoded quantities in lieu of measuring the RAMs that are installed. Used to speed electrical simulations, this scope should not be used on physical machines.
Important. count::P.slots.per.user
is a critical parameter for security. Although it’s possible to accelerate certain tests by reducing this number via jinx.measure.dims::
to indicate less than the page table’s correct size, doing so can undermine page table integrity, physical memory integrity, and program separation.
main::
When the assembler generates machine code for a program, the first instruction is a CALL
to the glossary, which the assembler places after the last instruction in the source. The second instruction begins scope main::
, which the assembler automatically labels. This is convenient for writing programs, but the lack of an explicit label can make this scope harder to locate.
As mentioned above under firmament::
, we want Osmin’s startup code to occupy the highest portion of kernel code memory, but main::
is at the second-lowest address. So main::
isn’t a suitable scope for holding startup code. Instead, main::
immediately transfers control to wipe.at.startup::
, which is located above firmament::
for possible reclamation of its code memory after startup. Instead of returning, wipe.at.startup::
will transfer control when it is done to boot::
.
When running an electrical simulation instead of a real machine, main::
may transfer control to memset.wipe.at.startup::
instead of wipe.at.startup::
in order to speed testing.
measure.dims::
Most of the RAMs in Dauug|36 computers have fixed sizes specified by the architecture. But a few sizes are left to the builder, leaving Osmin to live probe memory to determine these sizes. The measure.dims::
scope does this in the midst of wiping primary storage. The dimensions measured include:
1. The size of the code RAM, in words, is written to count::C.words.
2. The size of data RAM 0, in words, is written to count::D0.words if the chip is installed. Otherwise, 0 is written.
3. The size of data RAM 1, in words, is written to count::D1.words if the chip is installed. Otherwise, 0 is written.
4. The number of page table entries per user is written to count::P.slots.per.user, and 4096 times this number (the amount of virtual memory that a user can obtain, in words) is written to count::P.words.per.user.
Important. count::P.slots.per.user
is a critical parameter for security, because page table integrity (and therefore data memory integrity and program separation) depends on its accuracy.
5. The architecture’s maximum number of supported users, which is always 256, is written to count::max.users
. This number can in principle be lowered for a custom kernel without creating any security problems; however, little to no resources would be saved.
6. The address of the lowest write-protected word in data memory is written to p.addr::zero.page
. This address will depend on whether or not data RAM 0 is installed.
Although the Dauug|36 architecture can in principle run register-bound programs with neither data RAM 0 nor data RAM 1 installed, Osmin uses data memory and therefore requires at least one of the two data RAMs to be installed. Either one is fine.
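One common way to live-probe a RAM’s size is to look for address aliasing: in a chip of 2^k words, writing at address 2^k lands on address 0. This sketch illustrates the idea only; whether measure.dims:: uses aliasing detection specifically is not stated here, the names are invented, and detecting an absent chip (writes that never read back) is omitted.

```python
def probe_words(read, write, max_words):
    """Return the installed RAM size in words by detecting the address
    at which writes wrap around to address 0 (aliasing)."""
    SENTINEL, MARK = 0, 1
    write(0, SENTINEL)
    size = 1
    while size < max_words:
        write(size, MARK)     # if 'size' aliases address 0 ...
        if read(0) == MARK:   # ... the chip holds exactly 'size' words
            return size
        size *= 2             # candidate sizes are powers of two
    return max_words
```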
Scopes to erase memory en masse
memset.wipe.at.shutdown::
This alternate version of wipe.at.shutdown::
uses the MEMSET
pseudo-instruction to speed simulations. It cannot work (and is not needed) on real machines.
This alternate version does not actually wipe the superuser or user stack memory, because stack wiping cannot be accelerated. Only simulations that use the memset.wipe.*
scopes are affected.
memset.wipe.at.startup::
This alternate version of wipe.at.startup::
uses the MEMSET
pseudo-instruction to speed simulations. It cannot work (and is not needed) on real machines.
This alternate version does not actually wipe the superuser or user stack memory, because stack wiping cannot be accelerated. Only simulations that use the memset.wipe.*
scopes are affected.
memset.wipe.code::
This alternate version of wipe.code::
uses the MEMSET
pseudo-instruction to speed simulations. It cannot work (and is not needed) on real machines.
memset.wipe.data.ram::
This alternate version of wipe.data.ram::
uses the MEMSET
pseudo-instruction to speed simulations. It cannot work (and is not needed) on real machines.
Note: As of 3 November 2023, this scope doesn’t exist, and the associated wiping is done by inline code, because the task involved is minimal.
memset.wipe.phys.page::
This alternate version of wipe.phys.page::
uses the MEMSET
pseudo-instruction to speed simulations. It cannot work (and is not needed) on real machines.
memset.wipe.user.page.table::
This alternate version of wipe.user.page.table::
uses the MEMSET
pseudo-instruction to speed simulations. It cannot work (and is not needed) on real machines.
memset.wipe.user.registers::
This alternate version of wipe.user.registers::
uses the MEMSET
pseudo-instruction to speed simulations. It cannot work (and is not needed) on real machines.
memset.wipe.user.stack::
This is an alternate version of wipe.user.stack::
that runs quickly in simulations. It does not use the MEMSET
pseudo-instruction, and it does not actually wipe the stack, but it is named as though it did both, for consistency with the other memset.wipe.* scopes.
The LFSR-indexed stack memory provides no programmatic access to deterministic locations, so the MEMSET
instruction is useless for clearing this memory. Moreover, this code is only for optionally speeding electrical simulations, and wiping stack memory during simulation won’t protect real machines from exploits. So the only “wipe” done by memset.wipe.user.stack::
is a single CALI
(call stack initialize) instruction.
wipe.at.shutdown::
This scope is intended for use prior to rebooting or powering down the system. Its purpose is to remove deterministically as much potentially sensitive data and metadata from system RAM as possible. One motivation for this purging is that an attacker may be about to gain physical or other extraordinary access to the machine.
It is not electrically possible to wipe firmware from RAM, because firmware writes are electrically locked out immediately after all firmware is loaded. If an adversary somehow gains read access to this residual firmware, she or he may be able to determine the most recently running firmware version. The use case to change this is limited, because most power-downs are unlikely to remove the flash memory used to load the firmware.
Most of the kernel (but no kernel data) will still be resident in code memory after everything else has been wiped. It would be relatively straightforward to add a feature that also wipes the kernel from code memory, but one isn’t present as of November 2023. The use case to add this is limited, because most power-downs are unlikely to remove the medium from which the kernel is loaded.
wipe.at.shutdown::
erases the contents of all writeable RAM (all of the non-firmware RAM). This has to be done in a particular order due to dependencies within the CPU and kernel. The order implemented is:
- preemptive multitasking timer setpoint
- data memory 0
- data memory 1
- page table
- all user registers
- call return stack for all users
- code memory except for most of the kernel
- call return stack for the kernel
- all kernel registers
Once the above RAM has been cleared, the kernel enters an infinite loop. Further processing will not be possible without physical intervention such as cycling the power.
wipe.at.startup::
This scope places all user-program-accessible and kernel-accessible RAM in a deterministic state early in the boot process. One advantage is that memory allocation can be more responsive, because all blocks already will be zeroed at the time of each request. Another benefit is to guarantee that no data is held over in RAM from one invocation of the operating system to the next. Also, ensuring the system always boots with the same memory content reduces opportunity for erratic kernel and user behavior.
wipe.at.startup::
erases the contents of all writeable RAM (all of the non-firmware RAM). This has to be done in a particular order due to dependencies within the CPU and kernel. The order implemented is:
- preemptive multitasking timer setpoint
- call return stack for the kernel
- all kernel registers
- (note 1)
- (note 2)
- code memory except for most of the kernel
- all user registers
- call return stack for all users
- page table
- data memory 0
- data memory 1
Note 1. Unfortunately, wiping the kernel registers just removed all register constants needed by the kernel. They are restored at this point by calling the kernel glossary code a second time.
Note 2. At this point, measure.dims::
is called to determine the sizes and configurations of installed SRAMs. Had measurement occurred earlier, wiping the kernel registers would have lost these important parameters.
wipe.code::
Input | wipe.code::from, wipe.code::to
Time complexity | O(n) (but no users are running)
wipe.code::
fills a contiguous region of code memory with NOP
instructions. Inputs wipe.code::from
and wipe.code::to
indicate the lowest and highest addresses of the region. Both endpoints are included, and the selection granularity is one word.
wipe.code::
is called only at kernel startup and shutdown. It’s not used to remove code after a user has terminated; that task belongs to unload.text.segment::
.
wipe.code::
must be called with the CPU in PRIV
mode.
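The effect of wipe.code:: can be stated in a few lines. This is a sketch of the behavior described above, not the assembly implementation; the NOP encoding shown is a placeholder.

```python
NOP = 0  # placeholder; the real NOP opcode value is not given here

def wipe_code(code, lo, hi):
    """Fill code memory from lo through hi, both endpoints included,
    with NOP instructions, at one-word granularity."""
    for addr in range(lo, hi + 1):
        code[addr] = NOP
```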
wipe.data.ram::
Input | page::p
Output | page::p is overwritten
Time complexity | O(n) (but no users are running)
wipe.data.ram::
fills either of up to two data SRAM ICs with all zeros. To wipe RAM M0
, set page::p
to that RAM’s size in words, and call this scope. To wipe RAM M1
, set page::p
to that RAM’s size in words, also set bit 34 (chip select) of page::p
, and call this scope.
wipe.data.ram::
is called only at kernel startup and shutdown. It must be called with the CPU in PRIV
mode.
wipe.phys.page::
Input | page::p
Output | kernel’s page table slot 0 is overwritten
Time complexity | O(1) (page size is fixed by the architecture)
wipe.phys.page::
writes zeros to all 4096 words of a physical memory page. Input page::p
must be the base address of a valid physical page. It must be an exact multiple of 4096. The page must not be write-protected; that is, bit 35 of page::p
must be clear.
wipe.phys.page::
must be called with the CPU in PRIV
mode. The STO2
(store twice) instruction is used to reduce execution time by about half. Virtual page 0 is used to preclude need for a CMP
(compare) instruction in the loop.
wipe.phys.page::
is called not only at kernel startup and shutdown, but any time a physical page’s retain count reaches zero, which is typically but not necessarily when a user terminates.
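The preconditions on page::p stated above (page alignment, write-protect bit clear) reduce to two bit tests. This hypothetical helper only restates those checks; the kernel enforces them in assembly.

```python
WRITE_PROTECT = 1 << 35  # bit 35 of page::p must be clear
PAGE_WORDS = 4096

def valid_wipe_target(p):
    """True if p is a legal argument for wipe.phys.page::."""
    aligned = (p % PAGE_WORDS) == 0      # exact multiple of 4096
    writable = (p & WRITE_PROTECT) == 0  # page not write-protected
    return aligned and writable
```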
Note: As of November 2023, virtual memory allocation and deallocation for users has not been implemented yet. There is not yet a possibility of reaching a zero retain count, nor any code to respond to it.
wipe.user.page.table::
Input | 8-bit user id from ff u, count::P.words.per.user, p.addr::zero.page
Time complexity | O(1) (page table size is determined when board is soldered)
wipe.user.page.table::
maps all virtual memory for the current user into the zero page, a read-only page of data memory containing all zeros. This scope must be called with the CPU in PRIV
mode and will return in PRIV
mode. The caller must ensure the multitasking timer has been disabled. Also prior to calling, ff u
must be loaded via the USER
instruction with the 8-bit user id identifying which program’s page table is to be wiped.
The caller doesn’t need to provide the inputs count::P.words.per.user
or p.addr::zero.page
, because these are set by measure.dims
when the kernel initializes.
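What wipe.user.page.table:: accomplishes can be sketched as a loop over the user’s page table slots, pointing every slot at the read-only zero page. Names here are illustrative, not kernel identifiers.

```python
def wipe_page_table(page_table, slots_per_user, zero_page):
    """Map every virtual page slot for one user to the zero page:
    all reads then see zeros, and the page is write-protected."""
    for slot in range(slots_per_user):
        page_table[slot] = zero_page
```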
wipe.user.registers::
Input | 8-bit user id from ff u
Time complexity | O(1)
wipe.user.registers::
zeros all registers belonging to the current user. This scope must be called with the CPU in PRIV
mode and will return in PRIV
mode. The caller must ensure the multitasking timer has been disabled. Also prior to calling, ff u
must be loaded via the USER
instruction with the 8-bit user id identifying which program’s registers are to be wiped.
If wipe.user.registers::
is called for the superuser, the glossary and any prior registers will be lost.
wipe.user.stack::
Input | wipe.user.stack::jump.addr, 8-bit user id from ff u
Time complexity | O(1)
wipe.user.stack::
floods the entire stack of 255 return addresses for the current user with the address hypothetical.return.address:
(see source code for location). This address will never actually be returned to, because (1) the kernel doesn’t return from main::
, and (2) users that underflow their stack are terminated by enschedule::
.
Note 1. It nonetheless may be a good idea to monitor hypothetical.return.address:
and terminate the user if control reaches it.
Note 2. This scope doesn’t attempt to control the state of flags that are pushed on the stack. Although the possibility of exploiting this seems extremely remote, it may be a good idea to force all flags to a known state.
This scope must be called with the CPU in PRIV
mode and will return in PRIV
mode. Prior to calling, ff u
must be loaded via the USER
instruction with the 8-bit user id identifying which program’s stack is to be wiped.
When wiping the stack for a user, input wipe.user.stack::jump.addr
must be zero. This signals wipe.user.stack::
that there is a valid return address on the superuser’s call stack, so the scope will exit by means of a RETURN
instruction. When wiping the kernel stack, the return address originally on the stack will be overwritten, so the caller must set wipe.user.stack::jump.addr
to a nonzero address, to which the scope will exit via a JANY
(jump anywhere) instruction.
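The exit convention just described is a simple two-way choice: a zero jump.addr means the superuser’s stack still holds a valid return address, while a nonzero jump.addr supplies the destination after the kernel’s own stack has been overwritten. A sketch, with symbolic mnemonics in place of real machine words:

```python
def exit_action(jump_addr):
    """Select wipe.user.stack::'s exit instruction, per the convention
    above: 0 means RETURN; nonzero means JANY to that address."""
    if jump_addr == 0:
        return ("RETURN", None)     # caller's return address is intact
    return ("JANY", jump_addr)      # jump anywhere to the given address
```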
Scopes to manage data memory
alloc.and.retain.phys.page::
Output | page::p; page::i is overwritten; N flag
Time complexity | O(1)
alloc.and.retain.phys.page::
obtains an unused physical page of data memory, marks its reference count as 1, and returns its address in page::p
. While so doing, page::i
is overwritten, and the N
flag is cleared.
If this scope fails because no physical page remains to be allocated, page::p
is zeroed, and the N
flag is set. There is no ambiguity concerning the returned zero, because physical page 0 is reserved for use as the zero page and will never be allocated by alloc.and.retain.phys.page::
.
If this scope fails because the physical page pool is corrupt, a kernel panic is invoked.
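The allocation contract above (pop a free page, initialize its reference count to 1, signal failure with page::p = 0 and the N flag set) can be sketched as follows. The data structures are illustrative; the kernel’s actual pool layout is described in “Physical page pool” in Memory structures.

```python
def alloc_and_retain(free_pages, refcount):
    """Return (page address, N flag). On success the page's reference
    count becomes 1; on exhaustion, page 0 is returned with N set,
    which is unambiguous because page 0 is the reserved zero page."""
    if not free_pages:
        return 0, True            # out of physical pages: N flag set
    p = free_pages.pop()
    refcount[p] = 1               # brand-new mapping: one reference
    return p, False               # N flag clear
```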
further.retain.phys.page::
Input | page::p
Output | page::p; page::i is overwritten
Time complexity | O(1)
further.retain.phys.page::
accepts the address of an in-use physical page of data memory and increases its reference count by 1. This function is used to indicate physical pages that are mapped by more than one virtual page, so that the physical page is not deallocated until its last virtual page is unmapped.
Although the reference count increase is not checked for overflow, commercially available SRAM sizes are nowhere close to large enough for overflow to occur, unless this scope is called in an unintended manner that is not limited by page table memory.
A kernel panic may occur if the address supplied at page::p
does not reflect a valid, in-use physical memory page.
grow.superuser.memory::
Input | v.addr::used.to
Output | v.addr::backed.to; the superuser’s page table is updated
Time complexity | O(memory size) (per-boot sum of all calls)
All data memory used by Osmin’s kernel (superuser) is a single contiguous block starting at virtual address 0. This block grows monotonically as Osmin initializes. Once initialization is complete, no more kernel data memory is allocated or released until the system is shut down. A single pointer, v.addr::used.to
, indicates the kernel data memory block size; the words of the block span virtual addresses 0 through v.addr::used.to
− 1.
grow.superuser.memory::
allocates physical memory for the superuser’s page table on an as-needed basis. Input v.addr::used.to
must be one more than the highest virtual address requested. Output v.addr::backed.to
will be one more than the highest virtual address presently mapped. Frequently v.addr::backed.to
will be larger than v.addr::used.to
, because although virtual memory consumption may have single-word granularity, virtual memory allocation’s granularity is 4096 words.
A kernel panic could in principle occur if there is not enough physical memory available to fill the request. But this would require a smaller-than-commercial SRAM IC. If no data memory at all is installed, this scope never gets called.
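The relationship between used.to and backed.to described above is ceiling arithmetic at 4096-word granularity:

```python
PAGE_WORDS = 4096

def backed_to(used_to):
    """One more than the highest mapped virtual address: used.to
    rounded up to the next multiple of the 4096-word page size."""
    return -(-used_to // PAGE_WORDS) * PAGE_WORDS  # ceiling division
```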
index.to.phys.page::
Input | page::i
Output | page::p
Time complexity | O(1)
Each physical data memory page in a Dauug|36 machine is assigned a unique index for pool management. The set of these indices is contiguously numbered from index 0. Because a given page exists in exactly one of two SRAM ICs that have non-contiguous physical addresses, two scopes exist to map from index to physical page and vice versa. Scope index.to.phys.page::
accepts a physical page index (for data memory) in page::i
and returns its physical address in page::p
.
A kernel panic can occur if an invalid (too large for the installed data memory) page index is specified.
phys.page.to.index::
Input | page::p
Output | page::i
Time complexity | O(1)
Each physical data memory page in a Dauug|36 machine is assigned a unique index for pool management. The set of these indices is contiguously numbered from index 0. Because a given page exists in exactly one of two SRAM ICs that have non-contiguous physical addresses, two scopes exist to map from index to physical page and vice versa. Scope phys.page.to.index::
accepts a physical address (in data memory) in page::p
and returns its index within the physical page pool in page::i
.
A kernel panic can occur if an invalid address is specified.
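The two scopes above are inverses of each other. The following sketch shows one way such a mapping can work across two non-contiguous SRAMs; the base addresses, page counts, and panic conditions here are made-up examples, since the real values depend on the board (see measure.dims::).

```python
PAGE = 4096
M0_BASE, M0_PAGES = 0, 4          # data RAM 0: indices 0..3
M1_BASE, M1_PAGES = 1 << 34, 4    # data RAM 1: chip-select bit set

def index_to_phys_page(i):
    if i < M0_PAGES:
        return M0_BASE + i * PAGE
    if i < M0_PAGES + M1_PAGES:
        return M1_BASE + (i - M0_PAGES) * PAGE
    raise RuntimeError("kernel panic: page index out of range")

def phys_page_to_index(p):
    if M0_BASE <= p < M0_BASE + M0_PAGES * PAGE:
        return (p - M0_BASE) // PAGE
    if M1_BASE <= p < M1_BASE + M1_PAGES * PAGE:
        return M0_PAGES + (p - M1_BASE) // PAGE
    raise RuntimeError("kernel panic: bad physical page address")
```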
release.phys.page::
Input | page::p
Output | page::i is overwritten
Time complexity | O(1)
release.phys.page::
takes the address of an in-use physical data memory page at page::p
and reduces its reference count by 1, indicating that the caller has unmapped a virtual page that pointed to this physical page. If the reference count is now zero, indicating that the physical page is no longer mapped to any virtual memory, the physical page is returned to the physical page pool (of unallocated pages).
A kernel panic can occur if page::p
is not a valid physical data memory page, is the zero page, or is an unallocated page.
setup.phys.page.pool::
Output | count::phys.pages.installed, count::phys.pages.free, index::next.free.phys.page, v.addr::phys.page.pool
Time complexity | O(data memory size) (called once per boot)
setup.phys.page.pool::
initializes the data structures for Osmin’s map of physical data memory pages that are free and not free. See “Physical page pool” in Memory structures, as well as the indicated output variables in Kept registers, for specifics.
A kernel panic will occur if no data memory is installed.
Scopes to organize programs in code memory
hold.or.retain.user.program::
Input | text.seg::filename, text.seg::allow.priv, hold.or.retain.user.program::hold.or.retain
Output | text.seg::result; text.seg::allow.priv is cleared
Time complexity | O(N)
Reverse of | unhold.or.release.user.program::
The kernel does not call this scope directly, but instead calls either of two wrapper scopes: hold.user.program::
or retain.user.program::
.
hold.or.retain.user.program::
causes an executable program (binary machine language, not assembly language) to reside in code memory either indefinitely (hold) or until all running instances terminate (retain). See “Text segment pool” in Memory structures, with particular attention to offset 2, for additional information.
When called, hold.or.retain.user.program::
first checks to see if the executable program identified by input text.seg::filename
currently has a text segment pool entry (and therefore is present in code memory). If there is no pool entry for the program, load.text.segment::
is called to load the program into code memory and add it to the text segment pool with a retain count of zero. What follows depends on the input hold.or.retain.user.program::hold.or.retain
.
If hold.or.retain.user.program::hold.or.retain
is 2^35, bit 35 of the program’s retain count is set in its text segment pool entry. This “hold bit” signals the kernel to keep the program in code memory whether the program is running or not. Advantages of keeping a non-running program resident include fast startup and a guarantee that the necessary code memory is on hand.
If hold.or.retain.user.program::hold.or.retain
is 1, the program’s retain count is incremented in its text segment pool entry. This count signals the kernel that an instance of the program is present in the schedule, therefore the program must remain in code memory exactly as-is and where-is.
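The two updates just described touch different parts of the same pool entry: “hold” sets bit 35, and “retain” increments the count in the low bits. A minimal sketch (entry layout illustrative):

```python
HOLD_BIT = 1 << 35

def hold(entry):
    """Set the hold bit: program stays resident even when not running."""
    return entry | HOLD_BIT

def retain(entry):
    """Count one more running instance of the program."""
    return entry + 1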
When hold.or.retain.user.program::
succeeds, output text.seg::result
will be zero. Otherwise, text.seg::result
will indicate one of the following error codes:
text.seg::result | Description
...BRA`t | program contains a branch instruction to external code
...FMT`t | any of a few executable file format errors
...FNF`t | executable program file not found
...NOP`t | last instruction of a code page other than mandatory NOP
...OOM`t | insufficient free code memory to load program
...PRV`t | unprivileged attempt to affect privileged program
...TPF`t | text pool is full
Here are some hints for addressing these errors:
...BRA`t
shouldn’t occur unless some hanky-panky was involved creating the executable file. The assembler will only generate branches to labels or scopes within your program.
...FMT`t
means that the program file wasn’t produced by a working Dauug|36 assembler. Did you mistakenly name the wrong file, such as a program’s source code?
...FNF`t
is user error. Check that the executable file is present with the indicated name, and that the directory it’s in matches the electrical simulation’s .ns
test script.
...NOP`t
probably means that count::code.words.per.page
is not a multiple of the assembler’s -p
code page size option. Be sure not to use the cross assembler’s -b
option, except when assembling the OS kernel.
...OOM`t
means your program is larger than the unused code memory. Wow!
...PRV`t
means that although program text.seg::filename
contains privileged instructions, text.seg::allow.priv
was zero.
...TPF`t
means there isn’t a text pool entry available to load the program. The most likely cause is that you reduced count::max.users
to fewer than 256 in a custom kernel, and then tried to load more than that number of distinct programs. (Remember that the kernel and any subtasks it may contain count toward this total.)
NOT RTOS READY. Because this scope calls load.text.segment::
, time complexity can lead to unacceptable RTOS delays unless either (a) special precautions are written into the kernel, or (b) all programs are resident in code memory before RTOS constraints apply.
hold.user.program::
Input | text.seg::filename, text.seg::allow.priv
Output | text.seg::result; text.seg::allow.priv is cleared
Time complexity | O(N)
Reverse of | unhold.user.program::
hold.user.program::
is a small wrapper (essentially an entry point) for hold.or.retain.user.program::
to accomplish the following.
Input text.seg::filename
identifies a runnable Dauug|36 program within the electrical simulation’s paravirtualized filesystem. If the program currently has a text segment pool entry (and therefore is present in code memory), bit 35 is set in its retention count. If the program does not already have a text segment pool entry, the program is loaded into code memory, a text segment pool entry is created for it, and its retention count is set to 235. More details appear under “Text segment pool” in Memory structures.
Several types of error could occur during this scope’s execution. text.seg::result
will either be zero, indicating that no error occurred, or a code to help indicate what happened. A list of error codes appears above under hold.or.retain.user.program::
.
hold.user.program::
only succeeds if privileges are sufficient. In particular, if the program to be acted on contains privileged instructions, but text.seg::allow.priv
is zero, the scope fails with text.seg::result
set to ...PRV`t
. There are two branches for this test, depending on whether the program is already in code memory (just look it up in the text pool) or needs to be loaded (every instruction needs to be checked). As a precaution against unintended privileges later, text.seg::allow.priv
is always cleared before this scope returns.
NOT RTOS READY. Because this scope calls load.text.segment::
, time complexity can lead to unacceptable RTOS delays unless either (a) special precautions are written into the kernel, or (b) all programs are resident in code memory before RTOS constraints apply.
load.text.segment::
Input | text.seg::filename, text.seg::allow.priv
Output | text.seg::result, text.seg::start.jump, text.seg::allow.priv
Time complexity | O(N)
Reverse of | unload.text.segment::
load.text.segment::
is called when the kernel needs to open and read an executable file into code memory. Note that the file I/O is paravirtualized and simulator-provided, so an entirely different, more complex version must be written to run on real hardware. Also note that the paravirtualized I/O offers no access control (file permissions). This is a massive privilege hole that gives non-privileged programs unlimited power, by letting them alter privileged executables that will run later. This hole happens to be harmless on real machines, because the vulnerable filesystem never physically exists.
load.text.segment::
relocates and partitions the executable into JUMP
-chained pages so that code memory fragmentation won’t be a problem, as well as fills out any partial last page with NOP
instructions. The added JUMP
s overwrite NOP
s that the assembler automatically inserts. Numerous checks are made during load.text.segment::
to assure correctness and security. Below are the error conditions that may be returned in text.seg::result
. If any of these errors occur, the entire transaction is backed out. If all succeeds, text.seg::result
will be zero.
...FNF`t | File to be loaded does not exist.
...FMT`t | File does not start with ...d36`t magic number.
...OOM`t | Not enough code memory is free to load this executable.
...NOP`t | Last instruction of a page is not NOP as required.
...BRA`t | Instruction branches to outside of program’s code memory.
...FMT`t | Command within executable file is missing or misplaced.
...PRV`t | Privileged instructions found, but allow.priv is 0.
text.seg::allow.priv
only changes if everything is successful, and only then if it was nonzero and the program contains no privileged instructions. In this case, text.seg::allow.priv
is overwritten with zero so that the caller can inspect it to determine if the loaded program is or is not actually privileged.
The program’s first address in code memory as loaded is returned in text.seg::start.jump
with a JUMP
instruction placed in its upper nine bits. Because Osmin relocates programs when they are loaded, programs should make no assumptions as to the address of any instruction in code memory.
At 140 instructions, load.text.segment::
is the kernel’s longest and most-complex scope as of 2 April 2024. (The second-longest scope is the glossary, which has 116 instructions but no conditional branches. It is not at all complex.)
LATENT BUG. It looks like the JUMP
at the end of the list of free code segments has become a somewhat-arbitrary odd number that is not maintained current or tested. Instead, the list end appears to be unintentionally dead-reckoned via count::code.pages.free
. This might or might not work correctly.
setup.code.pool::
Output | v.addr::code.pool
Time complexity | O(code memory size) (called once per boot)
setup.code.pool::
collects all code memory that is not occupied by the kernel into a linked list of pages for allocation to users. It then stores a JUMP
instruction to the first unallocated page in v.addr::code.pool
.
The unallocated pages are filled with NOP
instructions, except for the last word of each page, which contains a JUMP
instruction to the next unallocated page. The JUMP
instruction at the end of the last unallocated page, having no further page to jump to, is a single-instruction infinite loop. See “How code memory pages are stored” in Memory structures for more information.
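The free-page chain just described can be sketched as follows. Encodings are symbolic (strings and tuples instead of real Dauug|36 machine words), and the tiny page size is only to keep the example readable.

```python
PAGE = 4  # illustrative page size; the real page is much larger

def build_code_pool(code, first_free_page, n_pages):
    """Fill each free page with NOPs, chain pages together with a JUMP
    in each page's last word, and make the final JUMP loop to itself.
    Returns the JUMP word stored in v.addr::code.pool."""
    for k in range(n_pages):
        base = first_free_page + k * PAGE
        for a in range(base, base + PAGE - 1):
            code[a] = "NOP"
        last = base + PAGE - 1
        nxt = base + PAGE if k + 1 < n_pages else last  # self-loop at end
        code[last] = ("JUMP", nxt)
    return ("JUMP", first_free_page)
```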
LATENT BUG. setup.code.pool::
does not appear to consider what happens if no non-kernel code memory pages are available. How can v.addr::code.pool
contain a JUMP
in that case? (Commercial RAM sizes likely preclude this from being possible assuming the kernel doesn’t grow much, but that isn’t a safe assumption.)
setup.text.pool::
Output | v.addr::text.pool |
% | |
Time complexity | O(1) (bounded by architecture’s limit of 256 users) |
This scope creates the text segment pool. No entries are present yet, so it’s just a wrapper around dict.new::
. For the element format, see “Text segment pool” in Memory structures.
release.user.program::
Input | text.seg::filename |
% | |
Output | text.seg::result |
text.seg::allow.priv is cleared |
|
% | |
Time complexity | O(N) |
% | |
Reverse of | retain.user.program:: |
release.user.program::
is a small wrapper (essentially an entry point) for unhold.or.release.user.program::
. release.user.program::
starts down the path of removing a terminated program from code memory. The process is more or less as follows.
1. If the program is not in memory, nothing happens. (TODO It’s likely to turn out that this condition always represents an error by the caller. If so, I may change the response to a kernel panic.)
2. The retention count is decremented without checking its present value.
Note. Step 2 makes almost the opposite assumption of step 1, where unneeded calls are harmless. The non-check here assumes that the caller hasn’t lost track of what’s running. It’s true that the caller should be defect-free in that aspect, but stronger verification would be helpful. See “Control path assertions” under Missing features for a potential tool to aid some verifications.
3. If the retention count is zero, the text segment is removed and its text pool entry is deleted.
It’s not necessary to check the result of this scope, which always sets text.seg::result
to zero to indicate success. The only anticipated error is a kernel panic. Unlike unhold.user.program::
, no check of text.seg::allow.priv
is made because all programs have a right to terminate whether or not they are privileged.
retain.user.program::
Input | text.seg::filename |
text.seg::allow.priv |
|
% | |
Output | text.seg::result |
text.seg::allow.priv is cleared |
|
% | |
Time complexity | O(N) |
% | |
Reverse of | release.user.program:: |
retain.user.program::
is a small wrapper (essentially an entry point) for hold.or.retain.user.program::
to accomplish the following.
Input text.seg::filename
identifies a runnable Dauug|36 program within the electrical simulation’s paravirtualized filesystem. If the program currently has a text segment pool entry (and therefore is present in code memory), its retention count in the pool is increased by one. If the program does not already have a text segment pool entry, the program is loaded into code memory, a text segment pool entry is created for it, and its retention count is set to one. More details appear under “Text segment pool” in Memory structures.
Several types of error could occur during this scope’s execution. text.seg::result
will either be zero, indicating that no error occurred, or a code to help indicate what happened. A list of error codes appears above under hold.or.retain.user.program::
.
retain.user.program::
only succeeds if privileges are sufficient. In particular, if the program to be acted on contains privileged instructions, but text.seg::allow.priv
is zero, the scope fails with text.seg::result
set to ...PRV`t
. There are two branches for this test, depending on whether the program is already in code memory (just look it up in the text pool) or needs to be loaded (every instruction must be checked). As a precaution against unintended privileges later, text.seg::allow.priv
is always cleared before this scope returns.
NOT RTOS READY. Because this scope calls load.text.segment::
, time complexity can lead to unacceptable RTOS delays unless either (a) special precautions are written into the kernel, or (b) all programs are resident in code memory before RTOS constraints apply.
unhold.or.release.user.program::
Input | text.seg::filename |
text.seg::allow.priv |
|
hold.or.retain.user.program::hold.or.retain |
|
% | |
Output | text.seg::result |
text.seg::allow.priv is cleared unconditionally |
|
% | |
Time complexity | O(N) |
% | |
Reverse of | hold.or.retain.user.program:: |
This scope implements the functionality behind the wrappers unhold.user.program::
and release.user.program::
.
The input hold.or.retain.user.program::hold.or.retain
must either be 20 (to release) or 235 (to unhold). For release, the retention count is presumed to be positive and is decremented. This occurs at termination of a running instance of the program. For unhold, no assumptions are made and bit 35 of the retention count is cleared. This occurs when it is no longer desirable to hold a non-running program in code memory.
The only error condition is an attempt to unhold a privileged program, but text.seg::allow.priv
is zero indicating the attempt itself is not privileged. In this case the retention count is not altered, and text.seg::result
will return ...PRV`t
. Otherwise text.seg::result
will be zero.
If the retention count is zero after the unhold or release adjustment, the text segment is wiped from code memory, the vacated code memory is returned to its free list, and the text segment pool entry for the program is removed and wiped.
NOT RTOS READY. The issue is that unload.text.segment::
can take a while. There is also a key lookup, but it is bounded by the number of running users, which can’t exceed 255.
unhold.user.program::
Input | text.seg::filename |
text.seg::allow.priv |
|
% | |
Output | text.seg::result |
text.seg::allow.priv is cleared |
|
% | |
Time complexity | O(N) |
% | |
Reverse of | hold.user.program:: |
unhold.user.program::
is a small wrapper (essentially an entry point) for unhold.or.release.user.program::
. unhold.user.program::
removes an administrative demand that a user remain in memory when it is not running. The process is more or less as follows.
1. If the program is not in memory, nothing happens. (TODO It’s likely to turn out that this condition always represents an error by the caller. If so, I may change the response to a kernel panic.)
2. The “hold flag”, implemented as bit 35 of the retention count, is cleared without checking its present value.
3. If the retention count is zero, the text segment is removed and its text pool entry is deleted.
Other than a kernel panic, the only error that may occur is an attempt to unhold a privileged program with text.seg::allow.priv
equal to zero. This would return ...PRV`t
in text.seg::result
. Otherwise, text.seg::result
will be zero to indicate success.
unload.text.segment::
Input | text.seg::start.jump |
% | |
Time complexity | O(N) |
This scope wipes the text segment identified by text.seg::start.jump
from code memory and returns its pages to the free list at v.addr::code.pool
.
BUG. The free list should be in a different scope, because code memory is not virtualized.
This scope succeeds unconditionally, assuming the kernel doesn’t panic.
Because the free list is singly linked, a naive implementation of unload.text.segment::
would reverse the page order. To help developers who may try to understand diagnostic output, this implementation reverses the pages and then frees them so as to favor ascending page order. The implementation would be simpler, and therefore “better”, if it didn’t worry about that.
LATENT BUG. It looks like the JUMP
at the end of the list of free code segments has become a somewhat arbitrary odd number that is neither kept current nor tested. Instead, the list end appears to be unintentionally dead-reckoned via count::code.pages.free
. This might or might not work correctly.
Scopes to manage user ids and process ids
get.from.user.pool::
Output | user.pool::uid |
N flag |
|
% | |
Time complexity | O(1) |
This scope is called when an id is needed for a new process. get.from.user.pool::
obtains the next available extended user id from the user id pool at the beginning of its circular buffer, placing it in user.pool::uid
. The N
flag is cleared to indicate the operation succeeded.
If the user id pool is empty (generally indicating that the architecture is already running as many programs as it can electrically support), user.pool::uid
is unchanged and the N
flag is set.
return.to.user.pool::
Input | user.pool::uid |
% | |
Time complexity | O(1) |
This scope is called when a process id is being retired. return.to.user.pool::
adds 256 to the extended user id in user.pool::uid
, thereby increasing its instance count by one. The result is written to the end of the user id pool’s circular buffer.
No checking is done to ensure the user id is not being duplicated in the pool. Presumably the kernel is handling process ids in a sane manner, and a check would require (with the chosen data structure) a linear search. Similarly, no check is done to ensure the pool is not full, which can only happen if an entry is duplicated or user id 0 (the kernel) is freed—either is bad.
setup.user.pool::
User ids, which identify a running process to the hardware, are eight bits. Extended user ids add another 28 bits to provide an instance counter for each user id.
setup.user.pool::
initializes the extended user id pool. See Memory structures for how the information is represented. The pool will initially contain extended user ids 1 through 255, meaning each instance counter is initially zero.
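The three pool scopes above can be sketched together in Python. This is an illustrative model, not kernel code: the deque stands in for the circular buffer, and returning None models the N flag being set. The 8-bit user id, the +256 instance bump, and the initial contents 1 through 255 come from the text above.

```python
from collections import deque

# Model of the extended user id pool.  Extended ids place an instance
# counter above the 8-bit hardware user id.
def make_pool():
    """setup.user.pool::  -- uids 1..255, instance counters zero."""
    return deque(range(1, 256))

def get_uid(pool):
    """get.from.user.pool::  -- take from the front of the buffer.
    Returns None (modeling the N flag set) if the pool is empty."""
    return pool.popleft() if pool else None

def return_uid(pool, xuid):
    """return.to.user.pool::  -- bump the instance counter by adding 256,
    then append to the end of the buffer.  No duplicate or full check."""
    pool.append((xuid + 256) & ((1 << 36) - 1))
```

Because an id's instance counter is bumped each time it is retired, a reused user id is never identical to any of its earlier extended incarnations.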
Scopes to schedule and multitask
deschedule::
Input | user.pool::uid |
% | |
Output | text.seg::result |
% | |
Time complexity | O(N) |
deschedule::
effectively terminates the program identified by extended user id user.pool::uid
. The only possible (non-panic) failure is if the identified program isn’t actually running, in which case text.seg::result
returns ...NAS`t
(not actually scheduled?). Otherwise, text.seg::result
will be zero.
The steps taken by deschedule::
are these:
1. The schedule entry is located for user.pool::uid
.
2. The extended user id is retired to the user id pool with its instance count incremented for its next use.
3. The extended user id in the schedule entry is replaced with 0, which is never a valid extended user id (because it indicates the superuser, and the superuser isn’t in the schedule).
4. The executable’s text segment is released. If its reference count reaches zero, the text segment is removed from code memory.
5. The multitasking preemption timer is disabled.
6. The defunct user’s page table is wiped.
7. The defunct user’s registers are wiped.
8. The defunct user’s stack is wiped.
9. The multitasking preemption timer is enabled.
What’s missing from deschedule::
is actual removal from the schedule dict, because the scheduler run.system::
is currently iterating over it. Instead, the scheduler sets a flag and removes all entries with zero extended user ids (from step 3) between epochs.
NOT RTOS READY. Most of deschedule::
’s time is spent freeing primary storage, but a little can be spent finding the program if a large number are running.
disable.timer::
This scope disables the multitasking preemption timer by setting ff tims
to all zeros. This is needed when the kernel decides to run in NPRIV
mode as a regular user to ensure the kernel is not interrupted at an inopportune time. As of 2 April 2024, the principal time this is needed is when the kernel is zeroing all registers for a not-yet-in-use or no-longer-in-use user.
Zeroing 512 registers for a user takes a little more than 2500 instructions. If the TIMER
setpoint is for its typical value of 65535 instructions, there is no need to disable the timer. But we do it anyway so that if someone customizes her kernel with an unusually short timer, it won’t break over this little thing.
There are some PEEK
and POKE
instructions in the kernel where the system switches to NPRIV
mode to copy registers between a user and superuser. disable.timer::
is not used or needed to safeguard these transitions. These are very short exposures, perhaps an instruction or two, in any event far shorter than any useful timer setpoint. Also, a call to disable.timer::
takes 39 instructions due to the serial transfer involved, so it’s not meant to protect super-short intervals.
disable.timer::
does not affect SETUP
or PRIV
mode, because the timer is electrically disabled in those modes.
enable.timer::
This scope enables the multitasking timer by setting ff tims
to the operating system’s (hardcoded) timer preset. This means that when the CPU is in NPRIV
mode, a hard limit is imposed on the number of instructions that will execute before control returns to the kernel. Because all instructions have exactly the same duration, limiting their number is equivalent to limiting their runtime.
enschedule::
Input | text.seg::filename |
text.seg::allow.priv |
|
% | |
Output | text.seg::result |
user.pool::uid |
|
% | |
Time complexity | O(N) |
enschedule::
effectively “starts a program.” What that means is:
1. The executable is loaded into memory if not already present, and its retention count is increased by one.
2. An extended user id (effectively a plaintext process id) is obtained for a new instance of the program.
3. A schedule entry is added for the instance. As of 2 April 2024, the entry contains only the extended user id and filename.
4. The call stack for the user is initialized. There are two parts to this initialization. First, the CALI
(call stack initialize) instruction ensures the stack pointer LFSR is nonzero. Second, a stack underflow-handling routine within the kernel is pushed on the stack. If reached, this routine will terminate the user.
5. The text segment’s start address is pushed on the user’s call stack.
At this point, the program is live and running like any other program, although the kernel still has the CPU. The program will “resume” from the beginning when its turn comes up in the scheduler.
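Steps 4 and 5 of the stack setup can be sketched in Python. This is an illustrative model, not kernel code; the handler label and function name are hypothetical.

```python
# Model of enschedule::'s call stack initialization.  If the program
# RETURNs with no frames of its own left, control lands on the kernel's
# underflow handler, which issues the terminate-me request on its behalf.
UNDERFLOW_HANDLER = "kernel.underflow"   # hypothetical label

def init_user_stack(start_addr):
    stack = []
    stack.append(UNDERFLOW_HANDLER)  # step 4: underflow-handling routine
    stack.append(start_addr)         # step 5: text segment's start address
    return stack
```

The first value popped (by the scheduler's transfer of control) is the program's start address; a final RETURN pops the underflow handler, which is how an ordinary RETURN-at-end terminates the user cleanly.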
resume.user.program::
When the kernel is ready to transfer control to the currently-selected user, all that is necessary is to NPCALL
a scope that contains only a REVERT
instruction. resume.user.program::
is that scope.
For a more in-depth explanation, take a look at the Preemptive multitasking documentation, which has a minimal working example that includes a resume.user.program::
scope.
run.system::
run.system::
is Osmin’s main loop to distribute timeslices to users and service API requests. It has no input registers other than the kernel’s ordinary state, and the only exit from this loop is to shut down the OS and wipe all primary storage.
run.system::
iterates through the schedule (list of users that are allowed to run) and switches to each in round-robin fashion. Control returns to the loop either by expiration of the multitasking preemption timer (most frequently) or after a user executes a YIELD
instruction. Both mechanisms arrive by the same electrical input, and Osmin cannot in itself distinguish between these cases.
Users make API calls by setting their own register #1, which the kernel names api.request
, to a nonzero value. Ordinarily at that point the user will YIELD
so that processing is not delayed. (YIELD
does have a latency of, as of 16 June 2023, two instructions. This number may change as the netlist design stabilizes. All YIELD
s should address this latency, such as by following YIELD
with a sufficient number of NOP
s.)
As of 2 April 2024, only two API functions are supported. The first is ...USU`t
(user stack underflow), which is a request to terminate the user. It’s not considered an error, because it’s the API call that’s issued if the user simply RETURN
s when it is done. The final RETURN
address points into the kernel within enschedule::
, which issues the API call on the user’s behalf. But the user can call the ...USU`t
API function whenever it’s ready to exit. No privilege checking is needed because every program has a right to terminate.
The other API function is ...SHU`t
, which halts the OS abruptly and wipes all primary storage. There is no privilege checking on this function! Why? This function’s purpose as of 2 April 2024 is to test that the kernel can distinguish among multiple possible API function requests. As the kernel matures, a privilege checking mechanism will be written into the API.
setup.schedule::
Output | v.addr::schedule |
% | |
Time complexity | O(1) (bounded by architecture’s limit of 256 users) |
This scope creates the schedule, the kernel’s list of running program instances. Because nothing is running when this scope is called, it’s just a wrapper around dict.new::
. For the element format, see “Schedule” in Memory structures.
Scopes to manage collections
Collections (also called dicts) are implemented by what are known as the dict.*
scopes. This is a set of 11 scopes that manage collections. The collection memory format is described in Memory structures. Inputs and outputs for the dict.*
scopes are passed via the dict::
data-only scope described in Kept registers.
dict.add::
Input | dict::addr |
dict::key |
|
% | |
Output | dict::el |
N flag |
|
% | |
Time complexity | O(n) |
dict.add::
adds or overwrites a collection element with a specific key, subject to available space. Prior to calling dict.add::
, the address of the collection and key being added must be provided via the input registers.
1. If the collection already contains the key, dict::el
is set to the address of the first element that matches the key. The non-key fields of that element are zeroed, and the key field is left unchanged. The N
flag is cleared.
2. Otherwise, if the collection is already full, the key is not present and cannot be added. dict::el
is replaced with all ones, and the N
flag is set. The caller can check for this error by testing the N
flag immediately (preferred), or by testing bit 35 of dict::el
before it can change. (Note that Dauug|36 virtual addresses are never large enough to set bit 35.)
3. Otherwise, a new element with the key is appended to the collection, and dict::el
is set to the element’s address. The remaining fields of the element will be zeros, and the N
flag is cleared.
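The three outcomes can be sketched in Python. This is an illustrative model on a list of (key, value) pairs, not kernel code; ALL_ONES models dict::el’s error value, and a returned False models the N flag being set.

```python
# Model of dict.add::'s three cases on a fixed-capacity collection.
ALL_ONES = (1 << 36) - 1   # error value; bit 35 set, unlike any real address

def dict_add(coll, el_max, key):
    for i, el in enumerate(coll):
        if el[0] == key:              # case 1: key already present
            coll[i] = (key, 0)        # zero non-key fields, keep the key
            return i, True            # N flag clear
    if len(coll) >= el_max:           # case 2: collection full
        return ALL_ONES, False        # N flag set
    coll.append((key, 0))             # case 3: append new zeroed element
    return len(coll) - 1, True
```

Testing bit 35 of the returned element address works as an error check precisely because ALL_ONES has bit 35 set and no valid virtual address does.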
dict.add.dupe::
Input | dict::addr |
dict::key |
|
% | |
Output | dict::el |
N flag |
|
% | |
Time complexity | O(1) |
dict.add.dupe::
is a variant of dict.add::
that never searches for and never overwrites existing elements that may have the same key. Instead, dict.add.dupe::
provides an unconditional append to a collection, subject to available space. This may result in the collection containing more than one element that matches the key. Prior to calling dict.add.dupe::
, the address of the collection and key being added must be provided via the input registers.
1. If the collection is already full, a new element cannot be added. dict::el
is replaced with all ones, and the N
flag is set. The caller can check for this error by testing the N
flag immediately (preferred), or by testing bit 35 of dict::el
before it can change. (Note that Dauug|36 virtual addresses are never large enough to set bit 35.)
2. Otherwise, a new element with the key is appended to the collection, and dict::el
is set to the element’s address. The remaining fields of the element will be zeros, and the N
flag is cleared.
dict.get::
Input | dict::addr |
dict::key |
|
% | |
Output | dict::el |
N flag |
|
% | |
Time complexity | O(n) |
dict.get::
finds the first instance of a given key, if one exists, within a collection. Detection of the key-not-present condition is supported. Prior to calling dict.get::
, the address of the collection and key being sought must be provided via the input registers.
If the key is present in the collection, output dict::el
will be the address of the first element having that key, and the N
flag will be cleared. If the key is not present, the N
flag will be set, and output dict::el
will be all ones.
dict.get.at::
Input | dict::addr |
dict::key (repurposed to specify an index) |
|
% | |
Output | dict::el |
N flag |
|
% | |
Time complexity | O(1) |
dict.get.at::
computes the address of an element based on its ordinal index within a collection. Detection of the index-out-of-range condition is supported. Prior to calling dict.get.at::
, the address of the collection must be in dict::addr
, and the index being sought must be in dict::key
.
If dict::key
is less than the number of elements in the collection, output dict::el
will be the address of the dict::key
th element, and the N
flag will be cleared. Otherwise, dict::key
is out of range, so the N
flag will be set, and output dict::el
will be all ones.
dict.iterate::
Input | dict::addr |
dict::el |
|
% | |
Output | dict::el |
dict::key (repurposed to output an index) |
|
N flag |
|
% | |
Time complexity | O(1) per element |
O(n) entire collection |
Successive calls to dict.iterate::
return the address and index of the next element in a collection. This scope’s purpose is to loop over the elements of a collection. Detection of the no-more-elements condition is supported. Prior to calling dict.iterate::
for the first time, the address of the collection must be in dict::addr
, and dict::el
must be set to all ones. Successive calls will return the address of each element in dict::el
and that element’s index—not its key—in dict::key
. The N
flag will be clear each time an element is returned.
After dict.iterate::
returns the last element, the next call to dict.iterate::
will set the N
flag, and dict::el
and dict::key
will contain all ones. This implies that if dict.iterate::
is called yet again at that point, output will restart with the first element.
As a reminder, dict::key
returns an element’s index, not its key. But it’s easy to obtain the key by loading dict::el
. If you wish to reuse the dict::key
register for this, your code would look like:
call dict.iterate
jump < out.of.elements
dict::key = ld dict::el
If a dict.iterate::
loop in turn calls any dict.*
scopes, including possibly an inner dict.iterate::
, it is necessary to save and restore dict::addr
and dict::el
between dict.iterate::
calls to prevent the loop from losing its place.
It is permissible to modify elements as they are iterated over in any way desired, including modifying their key. But dict.iterate::
is incompatible with element removal. Never call dict.remove::
, dict.remove.all::
, dict.remove.at::
, dict.remove.multiple::
, or dict.remove.unchecked::
, during iteration. The supported way to remove elements during iteration is to change their key to a “remove this element” magic number that never occurs naturally as a key, then after iteration has finished call dict.remove.multiple::
with that magic number.
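The supported pattern can be sketched in Python. This is an illustrative model, not kernel code; REMOVE_ME is a hypothetical sentinel standing in for the magic number, and the final list rebuild stands in for the dict.remove.multiple:: call.

```python
# Model of safe removal during iteration: tag doomed elements with a
# magic key while iterating, then remove them all after iteration ends.
REMOVE_ME = "remove.this.element"   # hypothetical sentinel key

def purge_during_iteration(coll, doomed_keys):
    # Iteration pass: only rewrite keys; never change the collection's
    # length, which is what would confuse dict.iterate::.
    for i, el in enumerate(coll):
        if el[0] in doomed_keys:
            coll[i] = (REMOVE_ME,) + el[1:]
    # After iteration: one dict.remove.multiple::-style pass on the sentinel.
    coll[:] = [el for el in coll if el[0] != REMOVE_ME]
```

Because the iteration pass changes only keys, the iterator’s position and the collection’s element count stay valid until iteration is over.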
dict.new::
Input | dict::el.max |
dict::el.size |
|
data memory allocation registers | |
% | |
Output | dict::addr |
data memory allocation registers | |
kernel panic if out of memory | |
% | |
Time complexity | O(1) |
dict.new::
allocates and initializes a new collection with no elements. This scope allocates all memory the collection may ever use based on a stated maximum number of elements supplied in dict::el.max
, and the number of words in each element supplied in dict::el.size
. Because dict.new::
and dict.get.at::
employ short multiplication instructions, dict::el.size
must be no larger than 63 (and can’t usefully be smaller than 1). The collection memory format is described under “Collections” in Memory structures. The principal output of dict.new::
is a pointer to the new collection in dict::addr
.
Memory allocation for dict.new::
uses virtual memory that is backed by physical memory. The kernel has a standard mechanism for handling this that is used for more than just collections. Several registers are involved, but this process is transparent to the caller of dict.new::
.
If there is not enough memory for dict.new::
to create a collection, the Osmin kernel will not be able to function as designed. This situation is highly improbable, because kernel memory requirements are very low compared to the size of SRAMs currently on the market. Even so, dict.new::
does test for this condition and invokes a kernel panic if memory is insufficient.
No means is provided to deallocate any kernel memory, including deallocating a collection. This is intentional and makes the kernel simpler and more resilient. There is a dict.remove.all::
scope that can remove and wipe all elements of a collection, but the collection will remain available for re-use afterward. dict.new::
must not be used to reinitialize or empty a collection, because the old memory will not be reclaimed.
dict.remove::
Input | dict::addr |
dict::key |
|
% | |
Output | N flag |
dict::el is overwritten |
|
% | |
Time complexity | O(n) |
dict.remove::
removes the first element with the specified key from a collection. For speed reasons, removal does not preserve order. Instead, the removed element is overwritten with the last element, then the last element is wiped with zeros, then the collection is shortened by one element.
Input registers dict::addr
and dict::key
specify the collection to remove from and key to remove, respectively. If no element matching the key is found to remove, nothing about the collection changes and the N
flag is set. Otherwise, the first matching element is removed and the N
flag is cleared.
This scope uses dict::el
internally, so its contents are lost.
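The swap-with-last removal can be sketched in Python. This is an illustrative model on a list of (key, value) pairs, not kernel code; a returned False models the N flag being set.

```python
# Model of dict.remove::'s order-destroying removal: overwrite the victim
# with the last element, wipe the vacated last slot, shorten by one.
def swap_remove(coll, key):
    for i, el in enumerate(coll):
        if el[0] == key:
            coll[i] = coll[-1]        # copy last element over the victim
            coll[-1] = (0, 0)         # wipe the vacated last slot
            coll.pop()                # shorten the collection by one
            return True               # found and removed: N flag clear
    return False                      # key not present: N flag set
```

This makes removal O(1) once the element is found, at the cost of reordering the collection, which is why dict.iterate:: cannot tolerate removals mid-iteration.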
dict.remove.all::
Input | dict::addr |
% | |
Output | dict::el is overwritten |
dict::key is overwritten |
|
% | |
Time complexity | O(n) |
dict.remove.all::
overwrites every element of a collection with zeros, and then records the number of elements in the collection as zero. This process restores a collection to exactly the same state—every last bit—that existed immediately after dict.new::
created the collection.
Input register dict::addr
specifies which collection to remove all elements from.
This scope uses dict::el
and dict::key
internally, so their contents are lost.
dict.remove.at::
Input | dict::addr |
dict::key |
|
% | |
Output | N flag |
dict::el is overwritten |
|
% | |
Time complexity | O(1) |
dict.remove.at::
removes an element based on its ordinal index within a collection. Detection of the index-out-of-range condition is supported. For speed reasons, removal does not preserve order. Instead, the removed element is overwritten with the last element, then the last element is wiped with zeros, then the collection is shortened by one element.
Prior to calling dict.remove.at::
, the address of the collection must be in dict::addr
, and the index of the element to remove must be in dict::key
. If the index is smaller than the number of elements presently in the collection, the element at that index is removed as described in the previous paragraph, and the N
flag is cleared. Otherwise, the index is out of range, so the collection is left unchanged, and the N
flag is set.
This scope uses dict::el
internally, so its contents are lost.
dict.remove.multiple::
Input | dict::addr |
dict::key |
|
% | |
Output | N flag |
dict::el is overwritten |
|
% | |
Time complexity | O(n) |
dict.remove.multiple::
removes every element with the specified key from a collection. For speed reasons, removal does not preserve order. Instead, the order will be as if dict.remove::
is called repeatedly until the key is no longer present in the collection. If there are no elements matching the key to be removed, the N
flag is set by this scope. Otherwise, the N
flag is cleared.
This scope uses dict::el
internally, so its contents are lost.
dict.remove.unchecked::
Input | dict::addr |
dict::el |
|
% | |
Output | none |
% | |
Time complexity | O(1) |
dict.remove.unchecked::
removes an element that has an already-known address from a collection. Prior to calling dict.remove.unchecked::
, dict::addr
must be the address of a collection, and dict::el
must be the address of an element that currently exists in said collection. For speed reasons, removal does not preserve order. Instead, the removed element is overwritten with the last element, then the last element is wiped with zeros, then the collection is shortened by one element.
Scopes to display data structures
These scopes are merely for diagnostic use, will eventually be removed, and are subject to change.
Some of this section’s output examples came from short tests that used tiny code page sizes. Don’t worry about the data presented.
dump.code.page.chain::
This scope displays a linked list of code pages. These can start either from a text pool entry or the code pool. The format begins with a chain`t
marker and ends with a single newline (no blank line). Here is a sample chain with four pages:
chain`t 213000003000`o 213000003020`o 213000003040`o 213000003060`o
Note that code pages are linked not by pointers, but by JUMP
instructions (213 octal).
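Each link word in the sample can be split into opcode and target. The bit split below (nine opcode bits over a 27-bit address in a 36-bit word) is inferred from the octal dumps; this is an illustrative Python helper, not kernel code.

```python
# Decode a code page link word: upper nine bits are the JUMP opcode
# (213 octal), the remainder is the target address of the next page.
def decode_link(word):
    return word >> 27, word & ((1 << 27) - 1)   # (opcode, target)
```

For example, the first link in the sample chain, 213000003020 octal, decodes to opcode 213 octal and target 3020 octal.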
dump.code.pool::
This scope displays on one line:
- the number of free code pages in decimal
- the marker
c.free`t
- for each free page, a
JUMP
instruction to it in octal
A blank line follows the output. Here is a sample, truncated from 169 JUMP
instructions:
169 c.free`t 213000002560`o 213000002600`o 213000002620`o 213000002640`o 213000002660`o 213000002700`o 213000002720`o 213000002740`o 213000002760`o 213000003000`o 213000003020`o
At the time of this writing, the JUMP
opcode is 213 octal.
dump.schedule::
This scope displays the kernel’s schedule of running programs, which as of 23 March 2024 contains only the extended user id and executable filename of each. The listing begins with the marker schedu`t
and ends with a blank line. If the schedule is empty, only the marker and blank line will be output. Here is a sample:
schedu`t
000000101`h 0clock`t
000000102`h 0color`t
000000103`h 0groot`t
000000104`h 0color`t
In this schedule, the executables assembled from clock.a
, color.a
, groot.a
, and color.a
have user ids 1, 2, 3, and 4 respectively. Note that two instances of color.a
are executing, although only one copy is in code memory.
The 0000001
high bits indicate this is the first time each of these user ids has been assigned since the kernel started.
Note that extended user ids are obfuscated when they pass through the API, so what dump.schedule::
shows may not match other trace output that you add.
dump.text.pool::
This scope lists the text segments (executable programs) that are currently loaded in code memory. The format starts with the marker t.pool`t
, ends with a blank line, and looks like this:
t.pool`t
0clock`t 213000000000`o 000001`t 0
0color`t 213000002420`o 000002`t 0
0groot`t 213000002500`o 000001`t 0
In this pool, the executables running are assembled from clock.a
, color.a
, and groot.a
, and their first instructions are at code addresses 0, 2420, and 2500 octal. The high 213 octal is a JUMP opcode to these addresses.
If you’re curious how clock.a
was loaded in the zero page, that page of memory was not available for the kernel because the boot loader was there when the kernel was loaded. The page became available when the boot loader finished.
The executables for clock.a
and groot.a
are retained once because one copy of each is running. Two copies of color.a
are running. If bit 35 is set for any of these counts, the corresponding program(s) are “held” in memory indefinitely, but no such case appears in this example.
The three zeros in the last field indicate that none of these programs is privileged. Privilege would be indicated by a 1.
Scopes to display test progress
blink::
This scope does a paravirtualized sleep for 1.074 seconds to get the developer’s attention. (The simulator output is typically scrolling like crazy. A one-second pause is extremely conspicuous.)
internal.error::
Kernel panic. This scope is an infinite loop that prints internal.error::code
about once per second. It does not return. The output looks like:
...oom`t ...oom`t ...oom`t ...oom`t ...oom`t
Osmin error codes are restricted to 18 bits so that a single IMH
instruction can load them. The high 18 bits are all ones. Recoverable errors (e.g. API call mistakes) are uppercase tetrasexagesimal, and kernel panic errors (the ones supplied to internal.error::
) are lowercase tetrasexagesimal.
As of 23 March 2024, defined kernel panics are as follows. All are bizarre situations that you’ll likely never see happen. Mentions here of “physical page” refer to data memory.
...cod`t |
kernel consumes all code memory, so no programs are possible |
...cor`t |
physical page pool is corrupt |
...ens`t |
enschedule:: can’t locate a just-added text pool entry |
...lfp`t |
attempt to free an unallocated physical page |
...mem`t |
data memory is too small to hold kernel data |
...ndm`t |
no data memory is installed |
...ppa`t |
invalid physical page address encountered |
...ppi`t |
invalid physical page index encountered |
...re0`t |
attempt to free the “zero page” that protects unused virtual pages |
...rfp`t |
attempt to “further” retain an unretained physical page |
newline::
This scope prints a newline, ASCII 10.
space::
This scope prints a space, ASCII 32.
trace::
This scope is a helper function to make it easier to print diagnostic messages. It wraps some PVIO
instructions and is kind of awkward. Here is a use example:
trace::radix = oct`t
trace::label = jumpTo`t
trace::n = text.seg::start.jump
call trace
call newline
Here is what the output looks like:
jumpTo`t 213000002420`o