[REVIEW]: lczexplore: an R package to explore Local Climate Zone classifications #5445
Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks. For a list of things I can do to help you, just type:
For example, to regenerate the paper PDF after making changes in the paper's md or bib files, type `@editorialbot generate pdf`.
👋🏼 @MGousseff, @matthiasdemuzere, @wcjochem, this is the review thread for the paper. All of our communications will happen here from now on. All reviewers should create checklists with the JOSS requirements using the editorialbot checklist command.

The JOSS review is different from most other journals. Our goal is to work with the authors to help them meet our criteria instead of merely passing judgment on the submission. As such, the reviewers are encouraged to submit issues (and small pull requests if needed) on the software repository. When doing so, please mention this review issue so that a link is created to this thread.

We aim for reviews to be completed within about 2-4 weeks; feel free to start whenever it works for you. Please let me know if any of you require significantly more time. We can also use the bot to set automatic reminders if you know you'll be away for a period of time.

Please feel free to ping me (@martinfleis) if you have any questions/concerns. Thanks!
@MGousseff please check the DOI suggestions above. If they are correct, include the DOIs in the paper. Thanks!
I'm sorry, I think the DOIs were in the bibtex but flagged with url = in lieu of DOI =. I'll fix it right now and send the pull request asap.
The DOIs have been fixed.
@editorialbot check references |
@editorialbot generate pdf |
Review checklist for @wcjochem

- Conflict of interest
- Code of Conduct
- General checks
- Functionality
- Documentation
- Software paper
Review checklist for @matthiasdemuzere

- Conflict of interest
- Code of Conduct
- General checks
- Functionality
- Documentation
- Software paper
@MGousseff: the repo contains a license file in markdown, whilst this checklist asks for a plain-text file? I am new to these JOSS requirements, so I am not sure if this is fine @martinfleis?
Hello @matthiasdemuzere, thank you for getting involved. I followed the recommendations of Hadley Wickham in https://r-pkgs.org/, but for common licenses the full license file is ignored (it is listed in the .Rbuildignore), as CRAN considers it redundant with the license specified in the DESCRIPTION file. So if a copy in full-text format is needed, I think I can add it and add its path to the .Rbuildignore; just let me know if it is needed.
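A minimal sketch of this approach, assuming the {usethis} package and an existing LICENSE.md at the repository root (both assumptions, not details taken from the lczexplore repository):

```r
# Minimal sketch (assumes {usethis} and a LICENSE.md at the repo root):
# keep a plain-text LICENSE copy for the JOSS checklist while excluding it
# from the CRAN build, where it would be redundant with the DESCRIPTION field.
file.copy("LICENSE.md", "LICENSE")       # hypothetical plain-text copy
usethis::use_build_ignore("LICENSE")     # adds ^LICENSE$ to .Rbuildignore
usethis::use_build_ignore("LICENSE.md")  # keep the markdown copy out of the tarball too
```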
The lczexplore R package presents a tool to explore and compare Local Climate Zone classifications.

Personally, I am not convinced that the current state of the package represents a substantial scholarly effort. I assume the code has been developed in the context of Bernard et al. (2023), a paper that is currently under review, serving a specific purpose within that paper? Data from that paper is therefore also used as a sample dataset within this package. But I am not convinced this package will be useful for a broader audience, who might have very different use cases or types of spatial classifications. Other reasons for this concern:

Need: I've been working with LCZ maps for many years now, yet I am not convinced the community is really interested in a tool that can compare two LCZ maps. Once you know about the differences/agreements between two maps, what then? How will you use this information? Without a proper reference (ground truth), I don't really see what the next steps are to put this to use. Bottom line: I am not convinced by the statement of need.

State of the field: I am a bit surprised by the shallow description here. The interest in LCZs is immense, yet this is not reflected here, e.g. the LCZ Generator (Demuzere et al., 2021) that has received 5000+ LCZ submissions in the past two years, the global LCZ map (Demuzere et al., 2022), or the many LCZ review papers, with Huang et al. (2023) likely being the most recent and comprehensive one.

Representation: From an LCZ content point of view, I wonder whether it really makes sense to compare maps that are developed from earth observation information and from VGI/GIS layers. Typically their spatial units are very different, one being more representative of the coarser neighborhood scale (as intended by Stewart and Oke (2012)), and one more of the block level. Please note that this comment might be more about semantics, and not a key reason for me to doubt the applicability of the package as such.

Coding standards / tests: the code seems relatively young, with most commits in the past 2 months only. As far as I can see, it has also not been tested outside the scope of the Bernard et al. (2023) paper, using e.g. different types of data. Even testing with raster LCZ maps from the most widely used sources, such as the LCZ Generator or other continental or global maps, seems limited.

In summary, I support the fact that the authors want to make this code publicly available. Yet in its current state, it feels more like a personal R library serving a specific goal than a tool that will be sufficiently useful for a broader community and more general applications. As such, I don't think it is ready to be published in JOSS. @martinfleis
Thank you @matthiasdemuzere for your review. I will answer the points you have raised and propose some modifications.

About the need: Concerning the need for the tool, we are a bit surprised by the comments “Yet I am not convinced the community is really interested in a tool that can compare two LCZ maps” and “I am not convinced by the statement of need.” Indeed, even if WUDAPT is the main standard method and endorsed by the community, there are other LCZ sources (ground truth, GIS approaches, machine learning...). So there is a crucial need to objectively quantify the differences obtained between two LCZ maps. In our opinion, lczexplore finds its place there. Another useful feature of the package is how easy it makes grouping LCZ types into broader categories: if one wants to compare the urban envelopes of an area, it is very straightforward to group, for instance, all the urban LCZ types, all the vegetation LCZ types and so on, and to compare the resulting maps, all in one go (see the sketch below).

About the state of the field:

Representation:

About coding standards:

Concerning the documentation and the README information:

Whatever the choice of JOSS regarding this paper, we are looking forward to deepening the comparison of several approaches to LCZ generation, including, of course, the WUDAPT and GeoClimate approaches.
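To illustrate the grouping feature mentioned above, here is a minimal sketch in plain R. It does not use the lczexplore API; the data frame `map_a` and the 101-107 coding of the natural classes are assumptions made only for the example.

```r
# Minimal sketch (not the lczexplore API): collapse detailed LCZ types into
# broader categories before comparing two classifications.
lcz_groups <- c(
  setNames(rep("urban", 10), as.character(1:10)),         # built types LCZ 1-10
  setNames(rep("vegetation", 4), as.character(101:104)),  # assumed coding for A-D
  setNames(rep("other", 3), as.character(105:107))        # assumed coding for E-G
)

# 'map_a' is a hypothetical data frame (or sf object) with an 'lcz' column
map_a$lcz_group <- lcz_groups[as.character(map_a$lcz)]
# Applying the same recoding to a second map makes the two directly comparable
# at the level of the broader categories.
```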
@MGousseff Thanks for providing additional information regarding my concerns. Unfortunately I am still not convinced by the statement of need. There are nowadays hundreds of papers using LCZs for a certain purpose. Yet, typically, people are not interested in comparing one LCZ map to another (typically not even in a proper accuracy assessment of one map). They just use the LCZ map they (or someone else) created (using whatever algorithm / input data) in any application of interest.

There are some that do an intercomparison, like the Muhammad et al. (2022) paper you mention in your manuscript, in which they conclude that the GIS-based LCZ method shows a strong improvement over the previous WUDAPT L0 result, based on accuracy metrics (confusion matrix). We've checked this map in detail with local experts, and believe it is missing important features, e.g. large LCZ 8 zones. So even though accuracy metrics can be higher, that does not necessarily mean that map is better. Continuing with this example, I could use your tool to quantify the differences between those maps, but that alone would not tell me which one to trust or what to do next.

So basically I am still looking for a clearer statement of need that outlines the potential of this tool to a broad community. How can it contribute to an improved understanding of the uncertainties, strengths and weaknesses of the LCZ framework / method(s)? Identifying differences between maps is one thing, but the crucial step is what comes after that: what to do with this information?

I hope I am not sounding too cruel here. I just genuinely would like to understand how this will benefit the community, in a broader sense and in a context of continuously advancing the field of LCZ-related work.
This would be much appreciated. I started using the tool, but I stopped as it was not 100% clear to me how to do so with a raster file. There was some very high-level info there, but in my opinion that required too much time investment. Given that the selling point is that this is a tool, it should in my opinion be much more automated and self-explanatory. On the side: how does the tool deal with different labels? LCZ labels can vary widely, for example for the natural classes: 11-17, A-G, or 101-107, ...
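As a generic illustration of the label question (not the package's actual behaviour), one way to harmonise the letter-coded natural classes with a numeric convention, here assumed to be 101-107, before any comparison:

```r
# Minimal sketch: translate the letter-coded natural classes (A-G) into an
# assumed numeric convention (101-107) so that two maps share one label scheme.
natural_lookup <- setNames(as.character(101:107), LETTERS[1:7])  # A -> 101, ..., G -> 107

harmonise_lcz <- function(lcz) {
  lcz <- as.character(lcz)
  is_letter <- lcz %in% names(natural_lookup)
  lcz[is_letter] <- natural_lookup[lcz[is_letter]]
  lcz
}

harmonise_lcz(c("1", "6", "A", "G"))  # returns "1" "6" "101" "107"
```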
@MGousseff can you do one more pass? I see line 21 should not have brackets, for example. 😊
@martinfleis it seems the article is 4x the size of what we consider a typical JOSS paper: https://joss.readthedocs.io/en/latest/submitting.html. Is there a reason for this I am missing?
I'm willing to comply, but I think I failed to understand the logic! You mean there should be no brackets at all when citing? I thought that we kept brackets when we cite the article and dropped them when we cite the authors, but maybe I misunderstood? If so, there are still lots of extra brackets! About the length of the article, I'll let @martinfleis answer, but the paper got a little longer when one of the reviewers asked to justify the need for comparing maps.
@oliviaguest The paper originally had a borderline length (I tend to minimise asking authors to cut stuff) and got longer during the review. I am personally fine with that as is. In my interpretation, the guideline is there more to limit the load on reviewers than for any other reason, but I'm happy to reconsider that position if you think it should be shorter.
Do you mean you're confused about the difference between parenthetical and in-text citations?
Here is a bit of an explanation: https://joss.readthedocs.io/en/latest/submitting.html#citations
I think it should have been shortened, but now it's past that point, and it's fine to publish as it is. Thanks for clarifying.
@editorialbot generate pdf |
I accepted the pull request. I'm very grateful, because the fixes I was about to make used the wrong syntax (see my comment on the PR). Thanks a lot. I guess we have to ask again 👍
@editorialbot generate pdf |
@editorialbot recommend-accept |
@editorialbot accept |
👋 @openjournals/sbcs-eics, this paper is ready to be accepted and published. Check final proof 👉📄 Download article. If the paper PDF and the deposit XML files look good in openjournals/joss-papers#4779, then you can now move forward with accepting the submission by compiling again with the command `@editorialbot accept`.
Ensure proper citation by uploading a plain text CITATION.cff file to the default branch of your repository. If using GitHub, a Cite this repository menu will appear in the About section, containing both APA and BibTeX formats. When exported to Zotero using a browser plugin, Zotero will automatically create an entry using the information contained in the .cff file. You can copy the contents for your CITATION.cff file here: CITATION.cff
If the repository is not hosted on GitHub, a .cff file can still be uploaded to set your preferred citation. Users will be able to manually copy and paste the citation.
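One possible way to produce the CITATION.cff from the metadata already in the package DESCRIPTION is the {cffr} package; this is only a suggestion, and it assumes {cffr} is installed and the command is run from the package root.

```r
# Minimal sketch (assumes the {cffr} package): derive CITATION.cff from the
# package DESCRIPTION and write it to the repository root.
# install.packages("cffr")
library(cffr)
cff_obj <- cff_create()                       # read metadata from the local DESCRIPTION
cff_write(cff_obj, outfile = "CITATION.cff")  # write the citation file
```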
🐘🐘🐘 👉 Toot for this paper 👈 🐘🐘🐘
🚨🚨🚨 THIS IS NOT A DRILL, YOU HAVE JUST ACCEPTED A PAPER INTO JOSS! 🚨🚨🚨 Here's what you must now do:
Any issues? Notify your editorial technical team...
I AM going to party like I just published a paper. 🤘 🙏🙏 🙏🙏 🙏🙏 🙏🙏 The package will be updated, as there is already a Shiny interface in development to launch GeoClimate.
Huge thanks to the editor: @martinfleis; and the reviewers: @matthiasdemuzere, @wcjochem! ✨ JOSS appreciates your work and effort. ✨ Also, big congratulations to the author: @MGousseff! 🥳 🍾
🎉🎉🎉 Congratulations on your paper acceptance! 🎉🎉🎉 If you would like to include a link to your paper from your README use the following code snippets:
This is how it will look in your documentation.

We need your help! The Journal of Open Source Software is a community-run journal and relies upon volunteer effort. If you'd like to support us, please consider doing either one (or both) of the following:
Submitting author: @MGousseff (Matthieu Gousseff)
Repository: https://github.com/orbisgis/lczexplore
Branch with paper.md (empty if default branch): master
Version: 0.0.1.0003
Editor: @martinfleis
Reviewers: @matthiasdemuzere, @wcjochem
Archive: 10.5281/zenodo.10041206
Status
Status badge code:
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
Reviewer instructions & questions
@matthiasdemuzere & @wcjochem, your review will be checklist based. Each of you will have a separate checklist that you should update when carrying out your review.
First of all you need to run this command in a separate comment to create the checklist:
The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @martinfleis know.
✨ Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest ✨
Checklists
📝 Checklist for @wcjochem
📝 Checklist for @matthiasdemuzere