ftrace does not work because mobj_get_cattr function fails #5198
Comments
Hi @etienne-lms, @jforissier, @b49020, @jbech-linaro, sorry to bother you; do you have any suggestions?
Hello @zhanlej, I tried running
It looks like the issue is happening when the TA tries to send the ftrace buffer to Normal World (tee-supplicant). Maybe try reducing
It may well be due to platform-specific shared memory constraints. Have you enabled dynamic shared memory, or are you relying only on the fixed reserved shared memory approach?
@jforissier @b49020 thank you for your reply.
I use version 3.16.0 of optee_os with no problems on QEMU, but it doesn't work properly on my hardware.
I set CFG_FTRACE_BUF_SIZE=4096 but the result did not change.
Sorry, I don't know where to set the shared memory configuration you mentioned; please tell me the location.
@zhanlej After looking more closely at your logs, it looks like you are hitting this problem with the static shared memory approach. The following is an untested patch; give it a try and see if it solves your problem:
@b49020 thanks for the code. But when I modified it according to your code, OP-TEE reported the following log:
From the log, it seems that the value of granule is incorrect, so I referred to the code of the mobj_reg_shm_alloc() function and added the line m->mobj.phys_granule = SMALL_PAGE_SIZE; to the mobj_shm_alloc() function. The detailed modifications are as follows:
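A rough sketch of the change being described, assuming the shape of the upstream mobj_shm_alloc() in core/arch/arm/mm/mobj.c; the surrounding lines are paraphrased and may differ in ti-optee-os, and only the phys_granule assignment is the actual addition:

```c
/*
 * Sketch only: surrounding code paraphrased from upstream
 * core/arch/arm/mm/mobj.c and may not match either tree exactly.
 */
struct mobj *mobj_shm_alloc(paddr_t pa, size_t size, uint64_t cookie)
{
	struct mobj_shm *m = NULL;

	if (!core_pbuf_is(CORE_MEM_NSEC_SHM, pa, size))
		return NULL;

	m = calloc(1, sizeof(*m));
	if (!m)
		return NULL;

	m->mobj.size = size;
	m->mobj.ops = &mobj_shm_ops;
	/* Added line: mirror mobj_reg_shm_alloc() so that code checking the
	 * physical granule of the ftrace buffer mobj sees a valid value. */
	m->mobj.phys_granule = SMALL_PAGE_SIZE;
	m->pa = pa;
	m->cookie = cookie;

	return &m->mobj;
}
```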
Finally, the ftrace-*.out files were successfully generated in the /tmp directory. But I have several new questions.
Question 1: Why doesn't this problem exist on QEMU?
Question 2: Because the configuration I use is
But the final generated ftrace.out only shows the last stack; it seems that a lot of content is ignored.
Is there something wrong with my configuration?
Question 3: When I set CFG_SYSCALL_FTRACE=y and recompile and run on optee_os v3.16.0, ftrace.out still shows very little data, but it does not crash. However, compiling and running with CFG_SYSCALL_FTRACE=y on ti-optee-os tag ti2020.00 causes OP-TEE to crash. The log is as follows:
Referring to the answer in #2256 (comment), I set STACK_THREAD_SIZE=16384 and CFG_FTRACE_BUF_SIZE=2048, but the problem still exists. Can you provide some help?
You could try increasing
Glad that it worked for you with these modifications. Feel free to create an OP-TEE PR with the following:
QEMU runs with dynamic shared memory enabled; you would see the kernel message below on QEMU but not on your platform:
So if you need to enable dynamic shared memory on your platform as well, enable it as described here, along with registering the non-secure DRAM ranges with OP-TEE, as in the example here.
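For reference, the platform side of this typically looks roughly like the sketch below. This assumes the usual upstream optee_os pattern; the CFG_CORE_DYN_SHM flag, the register_dynamic_shm() macro and the DRAM0_BASE/DRAM0_SIZE placeholders come from upstream examples rather than from the TI port, so verify them in your tree.

```c
/*
 * Sketch, assuming the upstream optee_os pattern; verify the exact flag,
 * macro and header names in your tree (the TI port may differ).
 *
 *   1) In core/arch/arm/plat-<yourplat>/conf.mk:
 *          CFG_CORE_DYN_SHM ?= y
 *
 *   2) In the platform's main.c, declare the non-secure DRAM range(s)
 *      that the kernel may register as dynamic shared memory, e.g.:
 */

/* DRAM0_BASE / DRAM0_SIZE are placeholders for your platform's values. */
register_dynamic_shm(DRAM0_BASE, DRAM0_SIZE);
```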
It looks like there is a gap in understanding of the ftrace config options. For more details refer here, but in summary: OP-TEE OS ftrace options:
TA ftrace options:
Indeed, I can confirm the same issue is observed if we set
@jenswi-linaro Thank you for your reply, but OP-TEE still crashes after setting STACK_TMP_SIZE=16384. Do you have any other suggestions?
OK, I will create a PR.
I'm lacking knowledge about dynamic shared memory, so I don't really understand the meaning of these configurations right now; perhaps my understanding will improve in the future.
I understand what these config options mean. My real problem is that when I set CFG_ULIBS_MCOUNT=y there is too little information in ftrace-*.out. I would like to see more complete OP-TEE OS call stack information. Initially I suspected the value of CFG_FTRACE_BUF_SIZE was too small, leaving insufficient memory to log the information, but after I increased CFG_FTRACE_BUF_SIZE the contents of the ftrace-*.out records did not increase. Is there a way to increase the OP-TEE OS call stack information in ftrace-*.out?
If some function calls seem to be missing, reducing the optimization level may help. Try setting
Try cleaning the TA; in particular, remove the file
I found that performance is very poor when using the RSA interface of the OP-TEE core, so I want to find where the bottleneck is. I tried to enable the ftrace feature of OP-TEE by following the ftrace documentation, but I ran into some problems, described as follows:
I am using a TI chip; TI provides ti-optee-os, which is based on optee_os with modifications. Although there are some differences between the two code bases, the core OP-TEE logic should not have been modified, because the bl32 I compiled from the upstream optee_os on GitHub also runs normally.
Hardware version: TI TDA4
OPTEE OS version: ti-optee-os tag ti2020.00 (similar to version 3.8.0)
OPTEE CLIENT version: ti-optee-client tag ti2020.00
OPTEE OS compilation parameters:
After compiling with the above parameters and flashing the device, I did not find any ftrace-related files in the REE /tmp/ directory.
At first I thought it was a problem with ti-optee-client, but by looking at the code I found CFG_FTRACE_SUPPORT ?= y in config.mk, so it is enabled by default.
So I tried adding some prints in ti-optee-os to find the problem; the modification record can be viewed at zhanlej/ti-optee-os@6a455cb.
The location the log finally points to is as follows:
From the log line "E/TC:? 0 mobj_get_cattr:69" and the code, I can see that mobj->ops->get_cattr is not set when mobj_get_cattr() is called. Since I don't know the OP-TEE core code well enough, this is as far as I can trace the problem.
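For context, the check that fails is roughly of the following shape (simplified from the upstream sources, not copied from either tree; line numbers and the exact return code may differ between versions):

```c
/* Simplified sketch of the failing path, based on upstream optee_os. */
TEE_Result mobj_get_cattr(struct mobj *mobj, uint32_t *cattr)
{
	/*
	 * The mobj backing the ftrace buffer here provides no get_cattr
	 * callback in its ops table, so the call fails and the buffer is
	 * never handed over to tee-supplicant.
	 */
	if (!mobj->ops || !mobj->ops->get_cattr)
		return TEE_ERROR_GENERIC;

	return mobj->ops->get_cattr(mobj, cattr);
}
```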
In addition, I built optee_os 3.16.0 with the same compilation parameters and added some debug logs (zhanlej@ed5e2e5); the resulting bl32 behaves the same way. The log is as follows:
Please help me understand the reason for this behavior.