Add new independent test to support running shell commands called "shellcommand" #150
Comments
@shawndwells Thanks for your input, we will check out the Script Check Engine for lessons learned there. As for Windows, we plan for a Windows-specific windows:powershellcommand test (which essentially mirrors the unix:shellcommand, but allows for specific backend support for differences in the environments). We had not considered Cisco IOS, IOS XE, or NX-OS as in scope for this; given the Cisco-specific schemas, which allow for running any global/line/interface command, I didn't see where Cisco needed this type of support.
Please find the draft proposal for the unix shellcommand test attached. The introduction (Background) section of the attached PDF follows.

Background: OVAL test types have been relatively specific in their targeted functions and, typically, have been applicable to a specific system, sub-system, or application on a target machine. The time and effort required to add new tests to the OVAL specification and the relative scarcity of resources impede efficient development and support of the specification. To counter this, and to broadly expand the breadth of OVAL tests in general, we propose two new OVAL tests, one each for the UNIX and Windows families: the Unix shell_command test and the Windows powershell_command test. Each is much broader in its application than the narrow tests currently available in OVAL, allowing a content author to use the full capability of the Unix shell (e.g. bash) or the Windows PowerShell to retrieve system characteristics. This document proposes the Unix shell_command test, specifically. A second proposal covers the Windows powershell_command test.

Since this test requires running code supplied by content, and since SCAP applications commonly run with elevated privileges, significant responsibility falls on the content author to do no harm. This also requires that any content stream that employs one of these tests MUST be from a known trusted source and be digitally signed. The use of any executables that are not supplied by the installed operating system is highly discouraged.

The advantages of these proposed tests include: once these tests are added, there will be a significant reduction in the need for other new OVAL tests and the resources required to develop and implement them.

The proposal document, example XML schema files, example content, and example results are attached. Hmmm..... GH is failing to accept my uploads and isn't complaining about it. Supporting files will be available shortly.
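For readers skimming the thread, a minimal sketch of what content using the proposed unix shell_command test might look like; the element names and IDs here are illustrative assumptions based on this discussion, not the normative schema from the attached proposal.

```xml
<!-- Illustrative sketch only; entity names are assumptions from this thread,
     not the attached draft schema. -->
<shell_command_object xmlns="http://oval.mitre.org/XMLSchema/oval-definitions-5#unix"
                      id="oval:example:obj:1" version="1"
                      comment="Run getenforce to collect the SELinux enforcement mode.">
  <command>getenforce</command>
</shell_command_object>

<shell_command_state xmlns="http://oval.mitre.org/XMLSchema/oval-definitions-5#unix"
                     id="oval:example:ste:1" version="1"
                     comment="The collected result should be 'Enforcing'.">
  <result operation="equals">Enforcing</result>
</shell_command_state>
```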
Attaching proposal document, schemas, sample content, and sample results for @johnulmer-oval
@OVAL-Community/oval-board-members we are looking for your high level feedback on the unix:shellcommand proposal, once we get some indication of approval, we will start the PR process. |
@solind and/or @A-Biggs (feel free to tag others), but we would really like for you to beat up our proposal and let us know what you think. If we get a general thumbs up, we'll do a pull request and ask for a review there, but wanted overall gut feeling first. We pondered if we should include subexpression support, and our current design does not. Looking for feedback. |
Hi @vanderpol, I think you should consider adopting the CIS equivalent to this test. You can find the schemas for that test here:
@solind, we did review the CIS equivalent test, and had no idea from the documentation included that it was an independent test; we assumed it was a Unix-only test for using a unix shell. Given that there doesn't seem to be any proposal explaining the specification, or sample content/results, and the documentation in the XSD is essentially "TO DO: Add some documentation", it's far from a spec that seems ready for prime time. Also, given that CIS has indicated they are dropping OVAL support going forward, that their content is not publicly available, and that this is our "one chance" to update OVAL for the foreseeable future, I think we should adopt whatever specification will stand the test of time.

We pondered making a single shell test that would be independent and cover both Windows and UNIX, and if you look at our proposals, they are nearly identical, but making separate tests allows for platform-specific documentation and the potential for platform-specific improvements in the future. And behind the scenes of every "independent" OVAL test is a set of platform-specific implementations, so the "savings" of an independent test are merely the trivial amount of time required to make the XSD files, which is already done.

In reviewing the technical aspects of the CIS shellcommand test, we found the inclusion of the exit_status and the stderr_line to be interesting, but not really useful from a content/results perspective, so we intentionally excluded them and let the application handle any errors. We didn't see any content needing to put those attributes in a state, nor any end user needing to see them. Once again, the documentation for the system characteristics item, "TO DO: Add some documentation", didn't really feel ready for prime time.

If there is some content created for the CIS shellcommand, I feel it would be a trivial effort to write a conversion script/transform to convert it to whatever is deemed worthy of official inclusion in OVAL. The same would hold true for supporting a new/replacement shell test vs. the CIS one; most of the underlying logic would be identical.
IIRC, there was never a formal proposal for it because the idea of a
"shellcommand" construct was shot down pretty quickly, due mostly to
reasoning around "you can run any command" including things that could
potentially be malicious. I can only guess that is the reason for my FINE
documentation of "TODO: Add some documentation".
CIS did have some benchmarks that utilized the shellcommand constructs, and
I've attached an (old) CentOS OVAL file that has some examples. Again,
digging deep into my "memory archives", I believe the "exit_status" was to
allow for tests that could potentially pass due to a failure in the
command. If for whatever reason, no results were produced by the command
and the exit_status was non-zero, perhaps that could be a "pass". As for
the "stderr_line", I feel like that was used mostly for producing evidence
in HTML reports, in case output was generated to stderr instead of stdout.
It doesn't look like any of the tests in the attached OVAL use either of
those elements, though... but I am pretty sure the "stderr_line" element
was mostly for evidence and not for any state/item comparison purpose.
Hope that helps!
-Bill M.
@vanderpol The need to capture both STDOUT and STDERR is quite real, as is the need to capture the exit code. Processes on both Windows and Unix have both these character output streams, and attempting to collect them together through redirection always introduces a potential race condition. Similarly, checking the exit code makes it possible to write a check whose outcome is determined by the success or failure of that process's execution, rather than its character output. These are all also collected in SCE, the "other" other way to run a command to produce a check result. The one thing that for sure isn't needed, I think, is the 'label' element.
Thanks @wmunyan, and I wasn't meaning to throw you under the bus with your old shellcommand test documentation; I was just trying to explain why we decided to go forward with our own, slightly differently named, but very similar test. I'm not sure your attached content made it; if you can try to re-attach it, that would be much appreciated, especially if there's some sample output to go with it.
@solind, thanks for the clarification on stdout/err and exit codes, I'll let @johnulmer-oval ponder this and chime in later. I'll also let John respond back on the 'label' element, as there's a subtle distinction between it and the "comment" attribute, and he can elaborate. |
First off, I'm glad to have y'all question everything. Regarding the 'label' element: it exists only to provide a description of the information in the 'result' element in an item. We thought about using the object 'comment', but our use of the object comment has been to describe the manner in which the object intends to collect system characteristics. If an oval object is trivial, the object comment could be used to describe what the object is intended to produce. However, for a non-trivial object, it is useful to be able to use the object comment to describe the function of the object. The 'label' is then needed to describe the 'thing' that is retrieved by the object 'command'. Regarding the stdout/stderr issue, I think there is some language in the proposal saying that stdout is caught and goes into the item 'result' element. Any output to stderr would cause an OVAL processing error and would be handled like any other oval processing error. Clearly, it would be trivial to add another element to the item type to catch stderr. My approach to designing this test has been to keep it as concise as possible. I may have made it too concise?
@johnulmer-oval Some output in stderr isn't necessarily an indication of a major problem. For example, if you're doing something like running a find across the filesystem, permission errors may land in stderr even though the command still produces the output you need.

My view of the comments is that the object comment should describe what the object collects, the state comment should describe what it's checking for, and the test comment should describe what the combination of the object and state comparison is intended to discern. And criteria, criterion, and extend_def tags also have comment attributes... but of course, not everybody uses comments consistently, because they are not material to the evaluation. Entities, on the other hand, are always material to the evaluation. That's why, to me, it doesn't make sense to put what's effectively a comment into an entity like that. It just feels wrong.
@solind John and I have been pondering this recently and would like to add it to the list for your thoughts. Is adding subexpression support, making the test like textfilecontent54, beneficial, or does it make things complicated? It would not be mandatory, just available if needed. I was searching through the DISA STIGs recently looking for a good example of a requirement that would clearly benefit from subexpression support; see below. This could be automated by the current shellcommand method, but the shell script would need to be more complex, and I'm not sure the results would be as clear.
Adding subexpression support makes the subexpression capture what a human would look for, and the script being run could match exactly what the STIG says to gather, rather than baking that interpretation into the script itself. Thoughts? From my perspective, checking that "lock-delay" is set to a value greater than "0", or that "idle-delay" is set to a value greater than "900", would be really straightforward to express as subexpression tests, but I'm not a shell script guru; maybe splitting the data and then doing numeric operations would be trivial there as well?
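To make the gsettings example concrete, here is a hypothetical sketch of subexpression support modeled on textfilecontent54; the pattern and subexpression entity names are assumptions, not part of the current draft.

```xml
<!-- Hypothetical sketch of subexpression support; not part of the draft schema. -->
<shell_command_object xmlns="http://oval.mitre.org/XMLSchema/oval-definitions-5#unix"
                      id="oval:example:obj:2" version="1"
                      comment="Collect the GNOME screensaver lock-delay value.">
  <command>gsettings get org.gnome.desktop.screensaver lock-delay</command>
  <!-- gsettings prints something like 'uint32 5'; capture the number -->
  <pattern operation="pattern match">^uint32\s+(\d+)$</pattern>
</shell_command_object>

<shell_command_state xmlns="http://oval.mitre.org/XMLSchema/oval-definitions-5#unix"
                     id="oval:example:ste:2" version="1"
                     comment="The captured value is greater than 0.">
  <subexpression datatype="int" operation="greater than">0</subexpression>
</shell_command_state>
```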
@solind , @vanderpol , |
@vanderpol I believe I am on record as being in favor of making it possible to use variable values (as an alternative to files) as inputs for textfilecontent, xmlfilecontent and yamlfilecontent objects. That would open up many possible use-cases involving process output. WDYT? |
@solind, can you clarify your last statement on variable inputs, and how that would relate to the shell test? If it's related to subexpressions and the fact that you could then take 'clean' data from any arbitrary command and then use it as a variable for usage in other parts of OVAL, then I'm in full agreement, but want to make sure I'm not missing your entire point. |
Hi. Is there any plan to add a timeout to the shell command being executed? Specifically, what exactly happens if the command is executed and never returns? My suggestion in this case is to add a timeout field (which defaults to 0, meaning no timeout) and abort the command execution if the timeout expires.
@DeafBDoor that's a great idea, and I agree on the default of 0 = no timeout; we should also document what the result would be if the timeout was reached and the command terminated (presumably a status of error).
Should the timeout be a part of OVAL, or be left to the tool/scanner to implement and handle? To me, that should be something specified via the tool/scanner configuration and not OVAL, as depending on the device being scanned, the timeout may vary a lot (i.e. running a command to search for a file on a drive). It seems weird to specify a timeout in the content when it is really a performance parameter; it should be handled at the tool/scanner level, which can determine what is acceptable, kill the process after a timeout, and handle it accordingly. Timeout values cause us a lot of headaches when trying to have a single timeout value work in all cases, and we have found that timeouts vary based on environments and even specific endpoints.
@A-Biggs valid points as well, as we also already have timeouts built into our application when we spawn external commands. Very good discussions, and there's no simple right path to get us there. Timeout is back to TBD status; maybe it's best handled internally by the application (which can give timeout options to the end user to change based on their environment).
Based on @solind's feedback regarding a single independent test vs. a Windows and UNIX test, we are now considering adding a shell entity for the command, in order to make it clear to the content author and provide more control. This shell entity would be an enumerated list (e.g. cmd, powershell, sh, bash, ksh), where each value could then be documented to explain how it functions and which systems it supports. cmd would be Windows only, and sh/bash/ksh would be unix only, but powershell could be cross-platform since technically you can install PowerShell on Linux. Unfortunately there wouldn't be any default value for this entity, given that there isn't a single one that would work on Windows/Linux/Mac/Solaris, so the content author would have to specify it every time. Refer to #151 for more info on this discussion.

If we implemented this, would it make more sense to have a separate 'shell' element in the object, or have it be part of the command element? The SQL tests are similar in this regard, with their 'engine', which can be sqlserver, oracle, etc., and in that design it's an element of the object, such as:

<independent-def:engine>sqlserver</independent-def:engine>

so

<shell_command_object xmlns="http://oval.mitre.org/XMLSchema/oval-definitions-5#unix" id="oval:org.mitre.oval.test:obj:1" version="1" comment="Retrieve SELinux status via getenforce.">

Or something like: getenforce

Or should we just leave them as separate tests? I don't feel that strongly one way or another.
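A rough sketch of the two layouts described above, using the getenforce example; both forms are hypothetical and only meant to make the comparison concrete.

```xml
<!-- Option A: a separate 'shell' element, analogous to the SQL tests' 'engine' element -->
<shell_command_object xmlns="http://oval.mitre.org/XMLSchema/oval-definitions-5#unix"
                      id="oval:org.mitre.oval.test:obj:1" version="1"
                      comment="Retrieve SELinux status via getenforce.">
  <shell>bash</shell>
  <command>getenforce</command>
</shell_command_object>

<!-- Option B: the shell carried on the command element itself -->
<shell_command_object xmlns="http://oval.mitre.org/XMLSchema/oval-definitions-5#unix"
                      id="oval:org.mitre.oval.test:obj:2" version="1"
                      comment="Retrieve SELinux status via getenforce.">
  <command shell="bash">getenforce</command>
</shell_command_object>
```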
I mean, what if you could reference an OVAL variable value instead of a file with these tests? Then, you could run a command that generates XML output (for example), have an OVAL variable consisting of an object_component referring to the stdout for that object, and then run some XPATH query against it using an xmlfilecontent_test. The main disadvantage is that the output would have to live in a variable value, which could bloat the size of the OVAL results file. |
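A purely speculative sketch of that idea follows; neither a source_variable entity on xmlfilecontent_object nor a stdout item field exists in OVAL today, so every name below is an assumption made for illustration only.

```xml
<!-- Speculative sketch; 'stdout_line' and 'source_variable' are invented for illustration. -->
<local_variable id="oval:example:var:1" version="1" datatype="string"
                comment="Capture the XML emitted by a shell_command object.">
  <object_component object_ref="oval:example:obj:3" item_field="stdout_line"/>
</local_variable>

<xmlfilecontent_object xmlns="http://oval.mitre.org/XMLSchema/oval-definitions-5#independent"
                       id="oval:example:obj:4" version="1"
                       comment="Run an XPath query against the captured output instead of a file.">
  <source_variable var_ref="oval:example:var:1"/>
  <xpath>/settings/setting[@name='lock-delay']/text()</xpath>
</xmlfilecontent_object>
```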
On timeouts, I agree with Adam, application-specified configurable timeouts make the most sense. Timeouts, in my experience, tend to have to be tailored to the environment. The test should specify whether a timeout should trigger the creation of an error, or leave the object with a |
An aspect of what we're proposing is that the content author will have the full breadth of the system functions that are available via the shell at their disposal to gather sys chars. I'm not clear on how variable-izing the inputs to the filecontent tests would achieve that? How would you get at the 'gsettings get' stuff Jack posted above? |
How about the shell_command_object provides a list of entities, each of which has an attribute setting the script language used and provides the script itself as a value? Then we could use the order of the items in that list as a priority (or add a priority value). Additionally, state that implementations must ignore items in the list which they don't support. My idea is something like this:
This would demand a state which contains a list of matching outputs. But it also gives the option to provide a single state able to match all the commands above (with end-of-line handling based on the platform which runs the command).
I just added a command to execute an external script, based on environment variables available in the shell session. The core of the idea here is how to organize, in a single place, shell scripts made for different platforms that have similar/identical purposes. I also like your idea of reusing the XCCDF check-text value to execute as a script.
@johnulmer-oval The difference is that these file-based objects embed more processing logic than is currently available in the built-in functions. Alternatively, I guess we could add functions for running XPATH or YAMLPATH or ... something you could use to grab a specific instance from a regex_capture (to replicate the capability of ...). As for the gsettings stuff, I think that would be implemented as three distinct OVAL tests, one for each of the three commands. You'd string them together using OVAL criteria.
This adds a potential attack vector for command injection from a malicious user. Of course, every command we run directly also has this potential, but I feel strongly that nothing executed locally should rely on a potentially untrusted script. Any system command or script that's run as part of a check should be included with the content itself and signed. SCE provides just such a mechanism already for XCCDF. |
To second what @solind just said, our goal would be to have the contents of the script inside of the oval document and the document be digitally signed (or the SCAP benchmark be digitally signed). If there was a reliable way to prevent executing external scripts, we would do so (any suggestions?). With Windows, we could prevent .ps1, .bat, etc., but with Linux/Unix, I'm not sure there is any 100% reliable way to determine if a file is an external script. We may want to include in the specification that the script 'shall' be included in its entirety in the OVAL definition.
No worries, you didn't throw me under the bus :) I tried replying to the GitHub notification with the attachment, but I guess that didn't work. Trying right in GitHub now... Ah, had to rename it as
Agree - the 3 gsettings would map easily to 3 criteria. I'm still not clear on what you're suggesting. Are you suggesting that our proposal(s) (with some modifications) could be implemented and that the item_field (in this case the item 'result' element) could be used to feed the other test types? If that's it, I'm all for it. Sounds like a good idea for another proposal.
Yes, that's what I'm suggesting -- that you could use that output instead of file contents for running those other tests. It's a little out-there of an idea, perhaps. I wasn't so much proposing it as suggesting it as an alternative and more generalized way to achieve the added functionality that you'd mentioned adding to the new test under consideration. |
Thanks everyone for what, from my view, is one of the most active conversations we've had in many years on any OVAL topic. I'll try to recap what I think is the general consensus; feel free to correct me if I missed something or misread the consensus opinion:
Please give this a thumbs up if I've captured the key items from this chat thread. If I've missed something or misunderstood, please reply back and let me know and I'll take another attempt at summarizing all changes by COB today. |
@solind in my discussions with @johnulmer-oval as he implements the proposed changes listed above, the question of what to do with standard error in certain circumstances arose, and I have something to ponder. If the command in question returns an exit status of "success" (whatever that translates to on Windows/Linux etc.), but some data was also printed to stderr, should we give the content author the ability to 'ignore or exclude standard error messages if the command ran successfully'? The scenario of some large find command that always ends up printing some standard error messages but also returns valid data that completes the test comes to mind. I could see this as a behavior that defaults to false, but it could allow results to be less confusing in certain scenarios.
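One way such a behavior could surface in an object, sketched here with an invented attribute name purely for discussion; nothing below is in the draft schema.

```xml
<!-- Hypothetical behavior, not in the draft schema: drop stderr output when the
     exit status indicates success. -->
<shell_command_object xmlns="http://oval.mitre.org/XMLSchema/oval-definitions-5#unix"
                      id="oval:example:obj:5" version="1"
                      comment="Large find that is expected to emit permission errors on stderr.">
  <behaviors ignore_stderr_on_success="true"/>
  <command>find / -xdev -type f -perm -0002</command>
</shell_command_object>
```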
@solind, @johnulmer-oval, and @A-Biggs: one thing to consider with changing the system characteristics of this test to a single item with multiple stdout/stderr elements is that, for scripts that generate N items, it may be harder to understand which stdout lines caused the failure; you may end up with 1000+ lines of stdout, but only 1 causing the failure and no way to know which one it is. Content in this scenario will likely need to be written from a 'no items shall exist' mentality and only print to stdout those items that fail. With a separate item per line of standard out, each line can be clearly marked as true/false, showing why something failed. For single commands that return a single row of data, item/state comparison will work just fine. Seems like a catch-22: either we make separate collected items for each row of standard out, and we have the potential for stderr and exit code being duplicated across all collected items, or we have a single collected item with N stdout/stderr lines but no way to determine why something passed/failed, depending on how the script/command works.
Regarding the proposed shell attribute, I want to question the value of defining this as an enumeration or otherwise with "limited options". It would seem that an attempt at a comprehensive list would fall short, especially over time. What are the intended advantages of defining the list? |
@balleman-ctr Without defining what shell the underlying OVAL interpreter should use, the results from the command/script provided could differ in very subtle ways that might not be obvious to the content author. If the content author is manually testing their content using bash and the OVAL interpreter is using 'sh', the results could be slightly different. Also, knowing whether this 'independent' test is intended to run on Windows via powershell or cmd is very handy as well. Given the ubiquity of 'bash', 'sh', 'powershell' and 'cmd', I suspect this list will be pretty much stable for the next 10 years, hopefully as long as OVAL exists. However, this list would be a trivial thing to change over time, without breaking backwards compatibility or requiring new renamed shell test versions; it can just be appended to, much like several other OVAL tests with enumerated types (think auditing and user rights). 6.0.1 could have an updated list if needed, etc. We could also add an 'autodetect' option and have it be the default, if content authors would prefer to let the tool figure it out. I think it would still be nice for the collected item to contain the shell that the tool actually used, though, so you could see exactly what ran.
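For what it's worth, such a list could live in the schema as an ordinary enumeration that later releases simply append to; the type name and value list below are only a sketch of that idea, not proposed schema text.

```xml
<!-- Sketch only; the type name and value list are assumptions. -->
<xsd:simpleType name="ShellEnumeration">
  <xsd:restriction base="xsd:string">
    <xsd:enumeration value="autodetect"/>
    <xsd:enumeration value="sh"/>
    <xsd:enumeration value="bash"/>
    <xsd:enumeration value="ksh"/>
    <xsd:enumeration value="cmd"/>
    <xsd:enumeration value="powershell"/>
  </xsd:restriction>
</xsd:simpleType>
```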
@vanderpol - I agree the attribute should exist, I'm just wondering what perceived downside there is to leaving it open-ended. Allowing the content author to supply the exact interpreter (e.g., the path to the executable) may provide better assurance the content runs as expected, at least on a known target platform. Of course given the open nature of this test, you could just wrap one script in another to call whatever interpreter you needed, I suppose. |
@balleman-ctr You are correct that specifying the full command to the shell interpreter would provide assurance, but it also could make your content run only on specific OS versions, whereas letting the application find the matching interpreter might be a bit more flexible and scalable. For example, on Windows there are something like 7 different versions of PowerShell, and you can have more than one (or none) installed, so specifying the path that is present on your content development system may not match 100% of the systems in the wild. With our application we have logic, built when creating the cmdlet test, which finds all of the available PowerShell versions installed and then runs the cmdlet in the most recent version found on the system. For Linux, Solaris, etc. you might have pretty good luck with standard paths, but the path for Windows is pretty long and painful to be entering ("C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe"), and may or may not be on every target system. Anyone else have an opinion on this topic? I don't have that strong of an opinion, but I do feel that having the application find the interpreter seems easier and more reliable. Also, the vendor would have a chance to test all of the shell interpreters that they support beforehand and know what to expect from the output; /usr/bin/myshell may not operate the same as a standard shell and may need some custom support.
You could have a |
Shell commands are always OS/version-specific. This is one of the reasons we didn't want to allow them in OVAL, but we've decided to put it in because practical necessity trumps academic perfection. |
I mean, presumably the content author can control the command that's being run by the content, no? There's a lot that you can do in a shell command to get at whatever exactly you're trying to achieve with it. We can't design this test to accommodate an author's inability to craft a good command to run. Exit codes are numeric values that typically correspond to documented failure states for a command. That's why they are both inherently useful, and important to collect. They are often more meaningful than the character output. One thing that's nice about running a shell command vs an OVAL test is that someone could always just run it on a target device directly to debug what's happening. |
About item creation and stderr: the intended target for this test is vulnerabilities for which there are no existing directly applicable OVAL tests AND for which there is some relatively straightforward 'shell' approach to getting the desired system chars. That idea of a "relatively straightforward approach" is significant, in my mind. I don't see how we can anticipate every possible way a shellcommand could produce stderr output. Different system commands return info in different structures: some very simple, some in a layered structure, some in a variety of structures. Consequently, I don't see a robust approach to accurately interleaving stderr lines with stdout lines. The proposed test will not be appropriate to all imaginable scenarios; it might be highly inappropriate at times. It will be up to the content author to decide how and when to employ this test. If their shellcommand produces too much noise, this test won't be a good solution. Since it will be incumbent on the content author to design their 'shellcommand' to return useful output, more responsibility/weight falls on the content author. If they want to use this test for more complex chores, they will have to be more capable at designing their command or script. That would be the content author's choice. Sounds like I'm restating Solind's paragraph:
This test won't be a full script engine that covers all scenarios, but it would still provide enormous utility and value. That said, I lean back toward keeping an expectation on the content author to design/write quality content. That would mean using commands in the shell that are ubiquitous, and using them in conventional ways that will yield usable STDOUT and STDERR. Finally, I'm going back to the assertion that a line of STDOUT should produce a single OVAL item. The object 'pattern' can be used to select or deselect output, or 'grep' can be used in the command to filter (in or out). Each item would contain a stderr element containing the entire output to STDERR, if any, produced by the system call.
I have updated the independent definitions and system_characteristics schema on the branch (https://github.com/OVAL-Community/OVAL/tree/150-add-new-independent-test-to-support-running-shell-commands-called-shellcommand). Changes include: Please see the schema for descriptions of the added elements. Example content and results files attached. |
Thanks @johnulmer-oval, I've created a pull request based on this update. If we have time, I'd like to see some sample content that exercises the pattern match and subexpression support. |
Hate to bother, especially @solind and @A-Biggs, but..... coming out of today's meeting Jack and I heard two different conclusions as to what the shellcommand item should look like. Given an object like:
The output to STDOUT would look like:

Two options:

1 - One and only one item is created that contains one or more stdout_line elements. Each line gathered via STDOUT gets its own '<stdout_line>' element (minOccurs="0" maxOccurs="unbounded").
OR 2 - One and only one item is created that contains one and only one 'stdout_text' element (maxOccurs="1"), which contains the entirety of the output retrieved via STDOUT. This would be a multiline string.
Thanks. |
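For clarity, a sketch of the two item shapes described above, using a made-up command and output (the real example from the meeting isn't reproduced here); the element names come straight from the two options, everything else is illustrative.

```xml
<!-- Option 1: one item, one stdout_line entity per line of output -->
<shell_command_item xmlns="http://oval.mitre.org/XMLSchema/oval-system-characteristics-5#unix" id="1">
  <command>ls /etc/ssh</command>
  <stdout_line>moduli</stdout_line>
  <stdout_line>ssh_config</stdout_line>
  <stdout_line>sshd_config</stdout_line>
</shell_command_item>

<!-- Option 2: one item, a single stdout_text entity holding the entire output as a multiline string -->
<shell_command_item xmlns="http://oval.mitre.org/XMLSchema/oval-system-characteristics-5#unix" id="2">
  <command>ls /etc/ssh</command>
  <stdout_text>moduli
ssh_config
sshd_config</stdout_text>
</shell_command_item>
```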
Hi @johnulmer-oval , the CIS test specifies one entity per line of stdout and one entity per line of stderr. So, one item, multiple entities (the first one). See the schema: |
@solind thanks for your response. We're 100% on the same page for stdout_line. However, our examples above vary slightly from the CIS test: we have 0-1 stderr entities, holding the output as one blob, vs. CIS having a separate stderr_line per line of standard error returned. We don't really see any benefit to having them as separate entities, but I guess we should just follow the CIS format for less change from the CIS extension?
Yeah, I'm thinking we should just keep that aspect of it unchanged from the CIS test (which some other vendors already have implemented). I'd also add that, if there is no output at all in stderr (as in, 0 bytes of output), we might consider using the convention:
The independent definitions and system_characteristics schemas in the branch for this issue have been updated to reflect the direction from the last meeting.
Abstract
Add an independent test called "shellcommand" which supports running any arbitrary command or the contents of a shell script on the target system. This feature should only be used if the content is trusted, ideally digitally signed at the benchmark level, the OVAL document level, or the object level.
This allows many more system configuration and vulnerability tests to be performed with OVAL without making the language overly large by adding numerous system commands to it. It will also allow security personnel who are not well versed in OVAL to create OVAL content. It should also help simplify some of the overly complex binary trees of tests that have plagued OVAL in the past.
Link to Proposal
Once a proposal has been put into pull request form, add a link to the PR here. If and as alternate proposals or objections are added they should be linked here as well.
Additional context
Add any other context or screenshots about the enhancement.