More comprehensive test coverage #9493
💯 |
This sounds like a fantastic idea, but I think I'm missing something: I fire up julia from […]. I've tried using […]
|
I just did it, and it worked for me. Are you definitely using the julia you are building from source? |
Pretty sure. See https://gist.github.com/sbromberger/282ed8d69695b34d4279 for the log. ETA: I do see a […]
|
Why are you doing […]? Anyway, I thought I'd try to give this a go, but I'm not getting sensible results from the coverage. For example, I got this:

```
        - function intersect(s::Set, sets::Set...)
        1     println(:intersect, s, sets)
        0     i = similar(s)
        0     for x in s
        0         inall = true
        0         for t in sets
        0             if !in(x,t)
        0                 inall = false
        0                 break
        -             end
        -         end
        0         inall && push!(i, x)
        -     end
        0     return i
        - end
```

which is even harder to explain. |
Because that's where the compiled julia (pre-[…]) code lives.
(On my system, […].) |
OK, mystery solved. Because I moved the source root after the build, I have broken symlinks all over the place. Specifically, […]. Why the symlinks within the build directory are absolute-pathed (at least on OSX) is a mystery to me. Far better, instead of having an absolute target like […] in […], would be a relative one, so that the build directory is independent of the overall filesystem and the source root ([…]). |
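(A toy illustration of the relative-versus-absolute point, with made-up paths rather than the actual julia build layout: a relative symlink keeps resolving after the tree is moved, an absolute one does not.)

```julia
# Made-up paths, purely to demonstrate relative vs. absolute symlink targets.
mkpath("/tmp/jltree/usr/lib/julia")
touch("/tmp/jltree/usr/lib/julia/sys.ji")
symlink("lib/julia/sys.ji", "/tmp/jltree/usr/sys-relative.ji")                  # relative target
symlink("/tmp/jltree/usr/lib/julia/sys.ji", "/tmp/jltree/usr/sys-absolute.ji")  # absolute target
mv("/tmp/jltree", "/tmp/jltree-moved")
isfile("/tmp/jltree-moved/usr/sys-relative.ji")   # true: resolves relative to the link's directory
isfile("/tmp/jltree-moved/usr/sys-absolute.ji")   # false: still points at the old, now-missing path
```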
@timholy, step 7 appears to be less detailed than the other ones. Perhaps it should at least link to http://julia.readthedocs.org/en/latest/stdlib/test/ for those new to unit testing. (The rest of the list is crystal clear, thanks for doing that!) Additionally, CONTRIBUTING.md mentions "Make sure you test your code as described here", but the "here" is kinda ambiguous. Does it mean the list that follows shortly afterwards, or was it meant to be a link, perhaps to the docs I pointed to above? |
@IainNZ: wow. OK, I'll take a peek and see if I can figure out what's happening. @waldyrious: very good suggestions. I've edited the post up top to include your link, and will look into editing CONTRIBUTING.md. |
@IainNZ, the following test script works for me:

[…]

I get useful coverage results with or without #9354 and […].
Does that work for you? If not, what platform and commit are you on? |
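(The script itself was lost in this extract. As a rough sketch of checking coverage results programmatically, assuming the Coverage.jl package and its process_folder/get_summary functions, something like the following summarizes the .cov files written by a coverage run; it is not the script referred to above.)

```julia
# Run from the top of the julia source tree after a --code-coverage run has written .cov files.
using Coverage
results = process_folder("base")       # parse base/*.cov against the corresponding sources
covered, total = get_summary(results)  # (lines hit at least once, lines that could have been hit)
println("base/: $covered of $total coverable lines exercised")
```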
Hi,

julia> include("runtests.jl")

[…]

I had another question regarding running these tests: currently, for personal devel stuff I prefer to use the nightly julia build from Elliot/staticfloat for Ubuntu-14.04-LTS. But, for the test coverage, is it kosher to use and build julia in the […]? |
@timholy everything seems to be working fine, not sure what was happening before. Woops! |
@svaksha, thanks for giving this a spin. Does […]?
I am not anything remotely resembling a build guru, so don't listen to me 😄. I'm not even sure where @staticfloat's binary installs. But, given those caveats: it should be OK to run […]. |
@IainNZ, very glad to hear that! It is weird that it was giving you something so strange; if you can replicate, please do file an issue. |
@svaksha I routinely keep a "stable" version installed via package manager (whether it be Homebrew on OSX, or aptitude on Ubuntu) and an "unstable" version just sitting in […]. |
On Wed, Dec 31, 2014 at 12:56 AM, Tim Holy notifications@github.com wrote:
Hi, and thanks for the reply @timholy! If "fail reliably" means it ran […]
Hah, same here - my ppa install gets called via […]
Good point - will keep this in mind.
True, but I'm a huge fan of sandboxed testing/development, which makes […]
PS: @timholy, would you mind sharing (in a blog post or a gist) how […]?
Thanks, SVAKSHA ॥ http://about.me/svaksha ॥ |
Hi, and thanks for the reply @staticfloat. On Wed, Dec 31, 2014 at 1:00 AM, Elliot Saba notifications@github.com wrote:
Got that. What isn't clear is how (if) the bash modifications affect […]
Could you elaborate a little more on what you mean by needing "./julia […]"?
On an unrelated note, I recall seeing a thread about running Julia […].
@staticfloat, also wanted to thank you for the nightlies - they make […]
Thanks, SVAKSHA ॥ http://about.me/svaksha ॥ |
I don't know that anyone has specifically tried this. It's been mentioned a few times, such as #4853 (comment) or a few times in relation to making the installation of IJulia easier, but I don't know if anyone has done any work in this direction. Maybe some people more familiar with the SciPy packaging ecosystem would know if it's happened yet from that side. I did once try using their BinStar binary packaging/building service, but it was (is?) very young and not too well documented and I couldn't figure it out. |
What I mean is this: my setup is to have a stable version in […]. My setup is most useful if you do most of your work in a stable version of julia, but want to try something out on […]. Does that make more sense?
I haven't heard anything about that recently, so if people are working on that right now, our paths haven't crossed.
I agree. Julia (and I'm only talking about things in this git repository; packages are another story) doesn't touch anything outside of […]. Packages can do whatever they want, however. For instance, every time I run […] |
On Wed, Dec 31, 2014 at 5:56 AM, Elliot Saba notifications@github.com wrote:
Thanks for explaining that. It's what I'd like to (and was trying to) […]
If I run the above command without the […]
Is there any documentation on how one can set up such an isolated environment?
SVAKSHA ॥ http://about.me/svaksha ॥ |
On Wed, Dec 31, 2014 at 5:38 AM, Tony Kelman notifications@github.com wrote:
Thanks for the reply. Found the SO thread: […]
SVAKSHA ॥ http://about.me/svaksha ॥ |
@svaksha, from inside […]. What I do is have […] |
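(One common way to get such a sandbox, offered here as an assumption rather than a description of the setup in the truncated comment above, is to point JULIA_PKGDIR at a throwaway directory so Pkg never touches ~/.julia.)

```julia
# Sandboxed package directory; the path is illustrative.
ENV["JULIA_PKGDIR"] = "/tmp/julia-sandbox-pkgs"
Pkg.init()     # creates and initializes the sandboxed package directory
Pkg.status()   # now reads and writes /tmp/julia-sandbox-pkgs instead of ~/.julia
```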
@svaksha, I remembered your […]. If you experience the same […] |
OK, I've substantially edited the instructions above; with these changes, I think we get reasonably accurate results, as long as a human parses the […]. However, I'm running into #9536, so (for me) currently this doesn't work. |
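(For the "human parses the output" step, a small helper along these lines, illustrative rather than something from the thread, flags lines that a .cov file reports as never run.)

```julia
# Print lines whose execution count is recorded as 0 in a .cov file.
function untested_lines(covfile)
    open(covfile) do io
        for (lineno, line) in enumerate(eachline(io))
            if ismatch(r"^\s*0\s", line)   # a leading count of 0 means compiled but never executed
                println("$covfile:$lineno  ", chomp(line))
            end
        end
    end
end

untested_lines("base/set.jl.cov")   # example path; point it at whichever file you are inspecting
```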
I would personally prefer a green badge, but I'm happy to defer to others. There are areas (the REPL, especially) that have overall poor test coverage so maybe we could open individual issues for those? |
I think this is one of those issues that's counterproductive to close until we hit 100%. After all, it is a great entry point for new developers, and we don't have that many good examples of that kind of issue. I'll be super-excited to see that color change to yellow! |
I agree, but when the number of comments becomes high, it's probably time to archive the discussion and continue the work in a new issue with an updated summary on top. |
I agree. For other areas, it would be nice to have specific issues to direct people to write tests for those, and close the umbrella issue. |
I'd like to have a few separate issues like "Improve REPL tests" or "Flesh out LAPACK tests" and keep the information in the OP here as a Gist/README somewhere. It's probably pretty intimidating for some people to go to the "more tests" issue and see 150+ comments. |
If people prefer that, it's all fine by me. |
@kshyatt Great idea to add the Coveralls badge to the README. Go for it! |
It's me being a pain again - Coveralls hasn't run in 4 days. Are the buildbots still down? |
Thank you for the ping, I've restarted the latest builds after clearing out the build directory following multiple failures in a row over the last few days. We'll see if things shake themselves out. |
Things did not shake themselves out; I had to rebuild the CentOS 5 buildbots. The good news is that's almost completely automated now. We've got a new coverage build up, but alas, still no git info making it up there. @kshyatt I seem to be having difficulty communicating git commit information to Coveralls. My mini script for submitting coverage results is:

```julia
# (imports implied by the calls below; they weren't part of the original snippet)
using JLD, Compat, CoverageBase
import Coverage: Coveralls

r1 = load("coverage_noninlined.jld", "results")
r2 = load("coverage_inlined.jld", "results")
r = CoverageBase.merge_coverage(r1, r2)
git_info = @compat Dict(
    "branch" => Base.GIT_VERSION_INFO.branch,
    "remotes" => [
        @compat Dict(
            "name" => "origin",
            "url" => "https://github.com/JuliaLang/julia.git"
        )
    ],
    "head" => @compat Dict(
        "id" => Base.GIT_VERSION_INFO.commit,
        "message" => "Merge pull request #11390 from ScottPJones/spj/deputf32"
    )
)
println("git_info: ")
println(git_info)
Coveralls.submit_token(r, git_info)
```

which results in: […] (You can also look at the actual execution log.)
Are you doing anything differently when you run things locally in order to generate the […]? |
I'm just using CoverageBase.jl. It's whatever method that package uses, which is presumably the same thing that's running on the buildbots! I've run coverage on julia.mit.edu and had it work - something might be weird about the buildbot? |
Can you show me the results of your coverage run? |
I think we should close this issue - and have new issues for some of these discussions. |
As suggested in JuliaLang#9493.
As suggested in JuliaLang#9493.
A number of more general purpose parts of base are covered only by virtue of their use in other tests in the suite. Is it fair to say that each module should be tested fully by its own test script, such that the modification of unrelated tests does not have the unintended consequence of leaving untested code? |
I find it helpful to have tests that fail for obvious reasons rather than for indirect reasons. For example, if there's a problem with type inference, it's much nicer if it fails a direct test in […]. |
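(A toy example of what such a direct test might look like; the function and expected result here are made up rather than taken from Base's suite.)

```julia
# A pinpoint inference check: fail here, at the source of the problem,
# rather than indirectly in some unrelated test that merely depends on inference.
using Base.Test
addone(x) = x + 1
@test Base.return_types(addone, (Int,)) == [Int]
```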
👍 @samuelpowell @timholy Definitely saves a lot of time to have pinpoint tests. |
Excellent, I'll see what I can do. |
Let's close this in favor of #11885 and continuing targeted pushes. The instructions in the top post are still valuable, but the nearly-200-comment discussion is not so relevant going forward and takes a while to load. |
Updated Feb 23, 2015.
This would make a whole bunch of good "first contribution" projects, and does not require deep insider knowledge of the language.
The basic idea is to expand the test suite to make sure that julia's base code works as promised. Here is one recommended way to contribute toward this goal:
The easy way

1. Write a test and add it to test/runtests.jl. http://julia.readthedocs.org/en/latest/stdlib/test/ may be helpful in explaining how the testing infrastructure works. Submit the test as a pull request (see CONTRIBUTING.md).

The manual method

1. Check out the master branch. Build julia with make.
2. Create /tmp/coverage_tests.jl: […] (This is the list of tests currently in test/runtests.jl, with 3 omissions (resolve, reflection, and meta) and one addition (pkg). The omitted tests explicitly test inlining, which we're going to disable, or are problematic when inlining is disabled.) A hypothetical sketch of what such a file can look like appears after this list.
3. rm usr/lib/julia/sys.so. Deleting sys.so will prevent julia from using any pre-compiled functions, increasing the accuracy of the results. (This also makes startup a bit slower, but that's fine for this test---and once you're done, simply typing make will cause this file to be rebuilt.)
4. Go to the test/ directory.
5. Launch julia --code-coverage=all --inline=no. This turns on code-coverage and prevents inlining (inlining makes it difficult to accurately assess whether a function has been tested).
6. include("/tmp/coverage_tests.jl").
7. Examine the .cov files in the base/ directory. Look for lines that have 0 in front of them (indicating that they were run 0 times), or have - in front of them (indicating that they were never compiled).
8. Write a test covering those lines and add it to test/runtests.jl. http://julia.readthedocs.org/en/latest/stdlib/test/ may be helpful in explaining how the testing infrastructure works. Submit the test as a pull request (see CONTRIBUTING.md).
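(For anyone following the manual method, here is a rough sketch of what /tmp/coverage_tests.jl might contain; the actual test list from the original post did not survive extraction, so the names below are only an illustrative subset.)

```julia
# Illustrative only: the real file lists (nearly) everything in test/runtests.jl,
# minus resolve, reflection, and meta, plus pkg.
testnames = ["core", "numbers", "strings", "collections", "hashing", "pkg"]  # assumed subset
for t in testnames
    println("running $t.jl ...")
    include("$t.jl")   # assumes julia was launched from the test/ directory (step 4)
end
```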