Attempting to de-constrain memory module #639
Comments
Thanks for reporting! The problem is that the … In the meantime, you should be able to directly store the mapper in a static:

```rust
static MAPPER: Mutex<Option<MappedPageTable<'static, Box(dyn Fn(PhysFrame) -> *mut PageTable + Sync)>> = Mutex::new(None);
```
|
Doing that caused a whole host of issues. First, the syntax wasn't valid in your statement. Then I needed to pull in alloc::boxed::Box. Then I had to change it to:

```rust
static ref MAPPER: Mutex<Option<MappedPageTable<'static, Box<dyn Fn(PhysFrame) -> *mut PageTable + Sync>>>> = Mutex::new(None);
```

That, in turn, caused a huge number of issues, mainly:
|
Have you tried adding a `Send` bound? |
That worked. But holy hell is this code disgusting. |
Perfect! Often you can directly follow the compiler errors. Just introduce a type alias for the boxed closure type:

```rust
type PhysToVirt = Box<dyn Fn(PhysFrame) -> *mut PageTable + Send + Sync>;

static ref MAPPER: Mutex<Option<MappedPageTable<'static, PhysToVirt>>> = Mutex::new(None);
```
|
Yeah, I'll probably do that. Make a type alias for the entire type in the statement. |
Ok, I think we can close this issue given that it's working now? |
Yeah, I just need to fix the code I wrote so it works. |
OK, so this isn't working very well. I've defined a type for the disgusting code:

```rust
type PageTableMapper = MappedPageTable<'static, Box<dyn Fn(PhysFrame) -> *mut PageTable + Sync + Send>>;
```

And that works:

```rust
static ref MAPPER: Mutex<Option<PageTableMapper>> = Mutex::new(None);
```

What doesn't work is when I go to actually initialize the mapper to something that I can use. The following doesn't work:

```rust
pub fn init(physical_memory_offset: u64, memory_map: &'static MemoryMap) {
    let mut mapper = MAPPER.lock();
    let mut allocator = FRAME_ALLOCATOR.lock();
    *mapper = Some(unsafe { PageTableMapper::from(init_mapper(physical_memory_offset)) });
    *allocator = Some(unsafe { GlobalFrameAllocator::init(memory_map) });
}
```

and fails with …

I can't use an `as` type cast, because that doesn't work with complex types. I also can't just do:

```rust
*mapper = Some(unsafe { init_mapper(physical_memory_offset) });
```

because it gives me that same blasted error. The function it's calling is the same (albeit renamed) as the one in blog_os:

```rust
unsafe fn init_mapper(physical_memory_offset: u64) -> impl MapperAllSizes {
    // Get active L4 table
    let (level_4_table, _) = get_active_l4_table(physical_memory_offset);
    // Convert the physical frame address into a virtual one
    let phys_to_virt = move |frame: PhysFrame| -> *mut PageTable {
        let phys = frame.start_address().as_u64();
        let virt = VirtAddr::new(phys + physical_memory_offset);
        virt.as_mut_ptr()
    };
    // And return it
    MappedPageTable::new(level_4_table, phys_to_virt)
}
```

Any ideas? |
Your init_mapper function returns an opaque `impl MapperAllSizes` type, which doesn't match the concrete `PageTableMapper` type of your static. To fix it, change the return type to your `PageTableMapper` alias:

```rust
unsafe fn init_mapper(physical_memory_offset: u64) -> PageTableMapper { … }
```

You will need to box your phys_to_virt closure for that. |
OK, I'll try some combinations and close this issue if I get it resolved. |
No, this isn't working. This function:

```rust
pub fn allocate_heap(start: u64, size: u64) {
    // Acquire mapper and allocator locks
    let (mapper, allocator) = (MAPPER.lock(), FRAME_ALLOCATOR.lock());
    // Determine if we've initialized the mapper and allocator yet
    if mapper.is_some() && allocator.is_some() {
        // Create a heap page range
        let mut mapper = mapper.unwrap();
        let mut allocator = allocator.unwrap();
        allocate_paged_heap(start, size, &mut mapper, &mut allocator).unwrap();
    } else {
        panic!("Attempted to allocate paged heap when mapper and allocator weren't initialized");
    }
}
```

fails with:
This is odd; I thought `Option<...>` implemented the Clone and Copy traits? If I add `.as_mut()` to it, a different error occurs:

```
error[E0277]: the trait bound `&mut x86_64::structures::paging::mapper::mapped_page_table::MappedPageTable<'static, memory::alloc::boxed::Box<(dyn core::ops::Fn(x86_64::structures::paging::frame::PhysFrame) -> *mut x86_64::structures::paging::page_table::PageTable + core::marker::Send + core::marker::Sync + 'static)>>: x86_64::structures::paging::mapper::Mapper<x86_64::structures::paging::page::Size4KiB>` is not satisfied
   --> src\memory.rs:150:1
    |
150 | allocate_paged_heap(start, size, &mut mapper, &mut allocator).unwrap();
    | ^^^^^^^^^^^^^^^^^^^ the trait `x86_64::structures::paging::mapper::Mapper<x86_64::structures::paging::page::Size4KiB>` is not implemented for `&mut x86_64::structures::paging::mapper::mapped_page_table::MappedPageTable<'static, memory::alloc::boxed::Box<(dyn core::ops::Fn(x86_64::structures::paging::frame::PhysFrame) -> *mut x86_64::structures::paging::page_table::PageTable + core::marker::Send + core::marker::Sync + 'static)>>`
    |
    = help: the following implementations were found:
              <x86_64::structures::paging::mapper::mapped_page_table::MappedPageTable<'a, PhysToVirt> as x86_64::structures::paging::mapper::Mapper<x86_64::structures::paging::page::Size1GiB>>
              <x86_64::structures::paging::mapper::mapped_page_table::MappedPageTable<'a, PhysToVirt> as x86_64::structures::paging::mapper::Mapper<x86_64::structures::paging::page::Size2MiB>>
              <x86_64::structures::paging::mapper::mapped_page_table::MappedPageTable<'a, PhysToVirt> as x86_64::structures::paging::mapper::Mapper<x86_64::structures::paging::page::Size4KiB>>
note: required by `memory::allocate_paged_heap`
   --> src\memory.rs:42:1
    |
42  | fn allocate_paged_heap(start: u64, size: u64, mapper: &mut impl Mapper<Size4KiB>, frame_allocator: &mut impl FrameAllocator<Size4KiB>)->Result<(), MapToError> {
    | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
```

I've boxed the phys_to_virt function as you said:

```rust
unsafe fn init_mapper(physical_memory_offset: u64) -> PageTableMapper {
    // Get active L4 table
    let (level_4_table, _) = get_active_l4_table(physical_memory_offset);
    // Convert the physical frame address into a virtual one
    let phys_to_virt = Box::new(move |frame: PhysFrame| -> *mut PageTable {
        let phys = frame.start_address().as_u64();
        let virt = VirtAddr::new(phys + physical_memory_offset);
        virt.as_mut_ptr()
    });
    // And return it
    MappedPageTable::new(level_4_table, phys_to_virt)
}
```
|
Update: solved the problem, thanks to a not-so-dissimilar problem over at https://www.reddit.com/r/rust/comments/bbjoqq/how_to_fix_value_used_here_after_move_error_in

```rust
pub fn allocate_heap(start: u64, size: u64) {
    let (mapper, allocator) = (MAPPER.lock(), FRAME_ALLOCATOR.lock());
    match (*mapper, *allocator) {
        (Some(mut m), Some(mut a)) => {
            allocate_paged_heap(start, size, &mut m, &mut a).unwrap();
        },
        _ => panic!("Cannot acquire mapper or frame allocator lock!")
    }
}
```

This (IMO) makes the function simpler and easier to read. But it still throws the same error as in the previous comment. I'm just confused now -- this doesn't make much sense to me. |
If you can't move (take ownership of) a value ("cannot move out of …"), try using a reference instead, for example by using … |
Neither worked. I may need to rid the code of the mutex in this instance, even though I really don't want to do that. |
The `lock` method returns a guard that derefs to a normal mutable reference, so you gain nothing by removing the mutex. Try the `Option::as_mut()` method. |
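(A minimal sketch of that pattern on a hypothetical `Mutex<Option<u64>>` static, assuming a `spin::Mutex` as in blog_os: `as_mut()` mutably borrows the value inside the guard instead of moving it out, which is exactly what the "cannot move out of" error complains about.)

```rust
use spin::Mutex;

static VALUE: Mutex<Option<u64>> = Mutex::new(None);

fn bump() {
    // Keep the guard alive for the whole block; `as_mut()` turns the guarded
    // `Option<u64>` into an `Option<&mut u64>` without moving anything out.
    let mut guard = VALUE.lock();
    match guard.as_mut() {
        Some(v) => *v += 1,
        None => panic!("not initialized yet"),
    }
}
```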
Changed it to:

```rust
pub fn allocate_heap(start: u64, size: u64) {
    let mut mapper = MAPPER.lock();
    let mut allocator = FRAME_ALLOCATOR.lock();
    match (mapper.as_ref(), allocator.as_ref()) {
        (Some(mut m), Some(mut a)) => {
            allocate_paged_heap(start, size, &mut m, &mut a).unwrap();
        },
        _ => panic!("Cannot acquire mapper or frame allocator lock!")
    }
}
```

And get:

If I change it to as_mut(), I get something similar:

I'm sorry if this is newbie stuff, it's just incredibly confusing. |
OK. One last issue. I'm trying to initialize the memory allocator. The problem is that my page mapper uses Box<>, which requires a memory allocator (I think). If this is going to work, I'll need to map physical memory into the allocator instead of using memory pages. Thoughts/ideas/suggestions? |
See https://os.phil-opp.com/heap-allocation/#creating-a-kernel-heap on how to map pages for the kernel heap. You can do that with the mapper instance before you initialize your static MAPPER. |
Yes, but according to that code, I need an already existing mapper. I can't map the page range with the mapper before I initialize it, because the mapper requires a heap, and I don't have a heap until I initialize the mapper. (That's very confusing.) I have this type:

```rust
type PageTableMapper = MappedPageTable<'static, Box<dyn Fn(PhysFrame) -> *mut PageTable + Sync + Send>>;
```

And this static:

```rust
static ref MAPPER: Mutex<Option<PageTableMapper>> = Mutex::new(None);
```

which expands to:

```rust
static ref MAPPER: Mutex<Option<MappedPageTable<'static, Box<dyn Fn(PhysFrame) -> *mut PageTable + Sync + Send>>>> = Mutex::new(None);
```

That's fine. It... half works. I have my init function:

```rust
pub fn init(physical_memory_offset: u64, memory_map: &'static MemoryMap) {
    let mut mapper = MAPPER.lock();
    let mut allocator = FRAME_ALLOCATOR.lock();
    *mapper = Some(unsafe { init_mapper(physical_memory_offset) });
    *allocator = Some(unsafe { GlobalFrameAllocator::init(memory_map) });
    let mut start_addr: u64 = 0;
    let mut end_addr = 0;
    for region in memory_map.iter() {
        if region.region_type == MemoryRegionType::Usable {
            start_addr = physical_memory_offset + region.range.start_addr();
            end_addr = physical_memory_offset + region.range.end_addr();
            while ((end_addr - start_addr) % 32768) != 0 {
                end_addr = end_addr - 1;
            }
            break;
        }
    }
    allocate_heap(start_addr, end_addr - start_addr);
}
```

and that function calls init_mapper():

```rust
unsafe fn init_mapper(physical_memory_offset: u64) -> PageTableMapper {
    // Get active L4 table
    let (level_4_table, _) = get_active_l4_table(physical_memory_offset);
    // Convert the physical frame address into a virtual one
    let phys_to_virt = Box::new(move |frame: PhysFrame| -> *mut PageTable {
        let phys = frame.start_address().as_u64();
        let virt = VirtAddr::new(phys + physical_memory_offset);
        virt.as_mut_ptr()
    });
    // And return it
    MappedPageTable::new(level_4_table, phys_to_virt)
}
```

And that's the issue, I think: the Box<> requires a heap, but this function also requires a heap to allocate the mapper, and to get a heap we need to map a page with the mapper. It's confusing, but I hope this code illustrates the problem. |
Oh, I understand the issue now. This is indeed a bit tricky. I'm about to add a new OffsetPageTable type to the x86_64 crate that avoids the Box. To try it, change the version of the x86_64 dependency … and adjust your init function to:

```rust
use x86_64::structures::paging::OffsetPageTable;

pub unsafe fn init(physical_memory_offset: u64) -> OffsetPageTable<'static> {
    let level_4_table = active_level_4_table(physical_memory_offset);
    OffsetPageTable::new(level_4_table, physical_memory_offset)
}
```

Your static then becomes:

```rust
static ref MAPPER: Mutex<Option<OffsetPageTable>> = Mutex::new(None);
```
|
Gloriously, that worked! My statics now look like:

```rust
lazy_static! {
    /// The page table mapper (PTM) used by the kernel global memory allocator.
    static ref MAPPER: Mutex<Option<OffsetPageTable<'static>>> = Mutex::new(None);
    /// The global frame allocator (GFA); works in conjunction with the PTM.
    static ref FRAME_ALLOCATOR: Mutex<Option<GlobalFrameAllocator>> = Mutex::new(None);
}
```

Unfortunately, actually mapping the heap is not working. My kernel is being mapped in the really high regions of memory; the boot physical memory offset (or PMO, as I call it) is 0xFFFFFC0000000000. The parent of that page is apparently a huge page, which is prohibiting mapping to that page range:

I have absolutely no idea why the boot loader is assuming such a ridiculous offset as that. (Isn't that offset outside the range of physical memory on any computer in existence?) |
Could you show me what you are trying to do? It sounds like you're confusing physical and virtual memory. The virtual address space is always 2^48 bytes large and completely independent of the amount of physical memory. |
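(A small standalone sketch to make the sizes concrete: 2^48 bytes is 256 TiB, and x86_64 virtual addresses are sign-extended ("canonical"), so a higher-half address like the bootloader's 0xFFFFFC0000000000 offset is a perfectly valid virtual address no matter how little physical RAM is installed. The helper below is illustrative, not from the thread.)

```rust
// 48-bit virtual address space: 2^48 bytes = 256 TiB, regardless of RAM size.
const VIRT_ADDR_SPACE: u64 = 1 << 48;

// Canonical form: bits 48..=63 must all equal bit 47.
fn is_canonical(addr: u64) -> bool {
    let top = addr >> 47;
    top == 0 || top == (1 << 17) - 1
}

fn main() {
    assert_eq!(VIRT_ADDR_SPACE, 256 * 1024 * 1024 * 1024 * 1024); // 256 TiB
    // The bootloader's physical-memory offset is a valid higher-half address.
    assert!(is_canonical(0xFFFF_FC00_0000_0000));
    // A non-canonical address, by contrast:
    assert!(!is_canonical(0x0001_0000_0000_0000));
}
```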
Yep. I have a function, allocate_heap, that allocates me a heap:

```rust
pub fn allocate_heap(start: u64, size: u64) {
    let mut mapper = MAPPER.lock();
    let mut allocator = FRAME_ALLOCATOR.lock();
    match (mapper.as_mut(), allocator.as_mut()) {
        (Some(m), Some(a)) => {
            allocate_paged_heap(start, size, m, a).unwrap();
        }
        _ => panic!("Cannot acquire mapper or frame allocator lock!"),
    }
}
```

But I can't call that within my init function, since that function already holds the mutex locks. So here's my init function:

```rust
pub fn init(physical_memory_offset: u64, memory_map: &'static MemoryMap) {
    let mut mapper = MAPPER.lock();
    let mut allocator = FRAME_ALLOCATOR.lock();
    *mapper = Some(unsafe { init_mapper(physical_memory_offset) });
    *allocator = Some(unsafe { GlobalFrameAllocator::init(memory_map) });
    // We risk over-allocating, but that's much better than under-allocating here.
    let mut start_addr: u64 = 0;
    let mut end_addr = 0;
    for region in memory_map.iter() {
        if region.region_type == MemoryRegionType::Usable {
            start_addr = physical_memory_offset + region.range.start_addr();
            end_addr = physical_memory_offset + region.range.end_addr();
            break;
        }
    }
    // We cannot call allocate_paged_heap here since we hold the spinlock, which would
    // result in an endless lock acquisition attempt loop. Instead we call the underlying
    // function directly here.
    match (mapper.as_mut(), allocator.as_mut()) {
        (Some(m), Some(a)) => match allocate_paged_heap_with_perms(
            start_addr,
            end_addr - start_addr,
            m,
            a,
            PageTableFlags::PRESENT | PageTableFlags::WRITABLE | PageTableFlags::HUGE_PAGE,
        ) {
            Ok(()) => (),
            Err(e) => panic!("Cannot allocate primary heap: {:?}", e),
        },
        _ => panic!("Cannot acquire mapper or frame allocator lock!"),
    }
}
```

For reference, I have two functions for heap allocation: allocate_paged_heap and allocate_paged_heap_with_perms.
Similarly, I have two other functions for mapping normal memory pages, allocate_page_range and allocate_page_range_with_perms. The code for allocate_paged_heap_with_perms looks like:

```rust
/// Allocates a paged heap with the specified permissions.
/// Possible permissions are:
///
/// * Writable: controls whether writes to the mapped frames are allowed. If this bit is unset in a level 1 page table entry, the mapped frame is read-only. If this bit is unset in a higher-level page table entry, the complete range of mapped pages is read-only.
/// * User accessible: controls whether accesses from userspace (i.e. ring 3) are permitted.
/// * Write-through: if this bit is set, a "write-through" policy is used for the cache, else a "write-back" policy is used.
/// * No cache: disables caching for this memory page.
/// * Accessed: set by the CPU when the mapped frame or page table is accessed.
/// * Dirty: set by the CPU on a write to the mapped frame.
/// * Huge page: specifies that the entry maps a huge frame instead of a page table. Only allowed in P2 or P3 tables.
/// * Global: indicates that the mapping is present in all address spaces, so it isn't flushed from the TLB on an address space switch.
/// * Bits 9, 10, 11, and 52-62: available to the OS, can be used to store additional data, e.g. custom flags.
/// * No execute: forbid code execution from the mapped frames. Can only be used when the no-execute page protection feature is enabled in the EFER register.
fn allocate_paged_heap_with_perms(
    start: u64,
    size: u64,
    mapper: &mut impl Mapper<Size4KiB>,
    frame_allocator: &mut impl FrameAllocator<Size4KiB>,
    permissions: PageTableFlags,
) -> Result<(), MapToError> {
    let page_range = {
        let heap_start = VirtAddr::new(start as u64);
        let heap_end = heap_start + size - 1u64;
        let heap_start_page = Page::containing_address(heap_start);
        let heap_end_page = Page::containing_address(heap_end);
        Page::range_inclusive(heap_start_page, heap_end_page)
    };
    for page in page_range {
        let frame = match frame_allocator.allocate_frame() {
            Some(f) => f,
            None => panic!("Can't allocate frame!"),
        };
        unsafe {
            mapper
                .map_to(page, frame, permissions, frame_allocator)?
                .flush()
        };
    }
    Ok(())
}
```

(Yes, I copied the descriptions for the flags from the x86_64 crate's docs, but that was because I couldn't find any other source that described them. I need to cite that in the comment.) |
The virtual address range that we choose for our heap is arbitrary and completely independent of the physical memory. As noted in https://os.phil-opp.com/heap-allocation/#creating-a-kernel-heap:

You, however, do this instead:

```rust
// We risk over-allocating, but that's much better than under-allocating here.
let mut start_addr: u64 = 0;
let mut end_addr = 0;
for region in memory_map.iter() {
    if region.region_type == MemoryRegionType::Usable {
        start_addr = physical_memory_offset + region.range.start_addr();
        end_addr = physical_memory_offset + region.range.end_addr();
        break;
    }
}
```

This way, you choose the virtual address range of the physical memory mapping (set up by the bootloader). This violates the "as long as it is not already used for a different memory region" condition and is the cause of your error message. Just replace this code with:

```rust
let start_addr = 0x_4444_4444_0000;
let end_addr = start_addr + 100 * 1024;
```

The … |
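(A sketch of how that replacement might slot into the init function shown earlier in the thread; the flag set and the 100 KiB size are illustrative, and allocate_paged_heap_with_perms is assumed to keep the signature from the previous comment.)

```rust
// Pick an arbitrary, otherwise-unused virtual range for the heap instead of
// reusing the bootloader's physical-memory mapping.
let start_addr: u64 = 0x_4444_4444_0000;
let heap_size: u64 = 100 * 1024; // 100 KiB to start with

match (mapper.as_mut(), allocator.as_mut()) {
    (Some(m), Some(a)) => allocate_paged_heap_with_perms(
        start_addr,
        heap_size,
        m,
        a,
        PageTableFlags::PRESENT | PageTableFlags::WRITABLE,
    )
    .expect("Cannot allocate primary heap"),
    _ => panic!("Cannot acquire mapper or frame allocator lock!"),
}
```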
True. My reasoning for using that was that I wanted to pick the first available "free" memory region, but I could also arbitrarily pick one. The issue then becomes "how do I detect whether it's used or not?" Can I just check the memory map for that? |
The memory map tells you which physical memory regions are free. The page table tells you which virtual memory regions are free. Since page tables allow arbitrary mappings, these two regions can be completely different. You got an error while mapping because you picked a virtual memory region that is not free (it was already mapped by the bootloader). |
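(To make the detection question concrete: the mapper itself can tell you whether a virtual address is already mapped. A small sketch, assuming the MapperAllSizes trait from the x86_64 crate version used in this thread; in newer crate versions the same method lives on the Translate trait.)

```rust
use x86_64::{structures::paging::MapperAllSizes, VirtAddr};

/// Returns true if `addr` is already mapped in the given page table,
/// i.e. that virtual address is NOT free.
fn is_mapped(mapper: &impl MapperAllSizes, addr: u64) -> bool {
    mapper.translate_addr(VirtAddr::new(addr)).is_some()
}
```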
It works! Thank you! |
Great, you're welcome! |
Right now I have the memory module as given in blog_os, with some modifications (mainly heap allocation, thanks phil-opp). I am attempting to remove the constraints on the module: right now I can't dynamically allocate pages outside of that module without an external memory mapper and frame allocator object, which I can't (and don't want to) pass around my entire kernel. It's not going very well.
I am trying to write to memory-mapped ports at address 0xFEBF0000 (QEMU's Intel HDA device). Unfortunately, this mapping is not present, and so I understandably get a page fault. I am, therefore, attempting to alter the module so I can map that region in the HDA driver and not at kernel load time (i.e., when heap allocation is taking place). Trying this:
And then setting the mapper and frame allocator statics to Some() when they're set up by the kernel at runtime is failing with:
I've gotten this very error before when attempting to add a VFS driver with a trait that drivers could use to implement hooks for their own custom code, so that the VFS layer is truly what it needs to be. I solved that by temporarily removing that part (it was a struct field), but I can't do that for this one. If the memory module remains as it is in the tutorial, it will prohibit me from porting libraries like newlib or other C libraries that require functions like malloc(), free(), realloc() and calloc(). This is problematic, especially if I want to make any programs run on my OS, as all of them require the C stdlib.
How would I then be able to do this without having to completely rewrite my memory module? I would like to keep the structure but make it less difficult to use.
As a side note: I am not using the heap allocator that the tutorial on Heap allocation suggests, but am rather using the slab allocator that Redox uses.
My other option would be to rewrite the OS in C. But that would be extremely painstaking to do, not to mention I would lose all of the benefits Rust has. Does anyone have any ideas on how I can do this?
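(For the original 0xFEBF0000 goal: once the mapper and frame allocator live in globals as discussed above, a driver can map the MMIO region on demand. A hedged sketch; the function name, the chosen virtual address parameter, and the flag set are illustrative, not from this thread.)

```rust
use x86_64::{
    structures::paging::{Mapper, Page, PageTableFlags, PhysFrame, Size4KiB},
    PhysAddr, VirtAddr,
};

/// Map one 4 KiB MMIO page (e.g. QEMU's Intel HDA registers at 0xFEBF0000)
/// at the given virtual address, using the global mapper and frame allocator.
fn map_mmio_page(virt: u64, phys: u64) {
    let mut mapper = MAPPER.lock();
    let mut allocator = FRAME_ALLOCATOR.lock();
    match (mapper.as_mut(), allocator.as_mut()) {
        (Some(m), Some(a)) => {
            let page: Page<Size4KiB> = Page::containing_address(VirtAddr::new(virt));
            let frame = PhysFrame::containing_address(PhysAddr::new(phys));
            // MMIO registers must not be cached.
            let flags = PageTableFlags::PRESENT
                | PageTableFlags::WRITABLE
                | PageTableFlags::NO_CACHE;
            unsafe { m.map_to(page, frame, flags, a) }
                .expect("MMIO mapping failed")
                .flush();
        }
        _ => panic!("memory subsystem not initialized"),
    }
}
```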