
[REVIEW]: lczexplore : an R package to explore Local Climate Zone classifications #5445

Closed
editorialbot opened this issue May 5, 2023 · 157 comments
Assignees
Labels
accepted · published · R · recommend-accept · review · Track: 4 (SBCS) Social, Behavioral, and Cognitive Sciences

Comments

@editorialbot
Collaborator

editorialbot commented May 5, 2023

Submitting author: @MGousseff (Matthieu Gousseff)
Repository: https://github.com/orbisgis/lczexplore
Branch with paper.md (empty if default branch): master
Version: 0.0.1.0003
Editor: @martinfleis
Reviewers: @matthiasdemuzere, @wcjochem
Archive: 10.5281/zenodo.10041206

Status

status

Status badge code:

HTML: <a href="https://joss.theoj.org/papers/76fbd3ed47b537f386a892f0270ea5f4"><img src="https://joss.theoj.org/papers/76fbd3ed47b537f386a892f0270ea5f4/status.svg"></a>
Markdown: [![status](https://joss.theoj.org/papers/76fbd3ed47b537f386a892f0270ea5f4/status.svg)](https://joss.theoj.org/papers/76fbd3ed47b537f386a892f0270ea5f4)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@matthiasdemuzere & @wcjochem, your review will be checklist based. Each of you will have a separate checklist that you should update when carrying out your review.
First of all you need to run this command in a separate comment to create the checklist:

@editorialbot generate my checklist

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. For any questions or concerns, please let @martinfleis know.

Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest.

Checklists

📝 Checklist for @wcjochem

📝 Checklist for @matthiasdemuzere

@editorialbot editorialbot added R review Track: 4 (SBCS) Social, Behavioral, and Cognitive Sciences waitlisted Submissions in the JOSS backlog due to reduced service mode. labels May 5, 2023
@editorialbot
Collaborator Author

Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.

For a list of things I can do to help you, just type:

@editorialbot commands

For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:

@editorialbot generate pdf

@editorialbot
Collaborator Author

Software report:

github.com/AlDanial/cloc v 1.88  T=0.35 s (107.8 files/s, 12201.7 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
R                               29            538            794           1804
Markdown                         3            164              0            413
Rmd                              2            160            197            104
TeX                              1              9              0             91
YAML                             1              2              4             19
JSON                             2              0              0              2
-------------------------------------------------------------------------------
SUM:                            38            873            995           2433
-------------------------------------------------------------------------------


gitinspector failed to run statistical information for the repository

@editorialbot
Collaborator Author

Wordcount for paper.md is 2359

@editorialbot
Collaborator Author

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1016/j.eiar.2015.10.004 is OK
- 10.1080/17512549.2015.1043643 is OK
- 10.1175/bams-d-11-00019.1 is OK
- 10.1016/j.buildenv.2021.107791 is OK
- 10.3390/land11050747 is OK
- 10.1111/j.1467-8306.1965.tb00529.x is OK

MISSING DOIs

- 10.1016/0013-9351(72)90023-0 may be a valid DOI for title: Some effects of the urban structure on heat mortality
- 10.21105/joss.03541 may be a valid DOI for title: GeoClimate: a Geospatial processing toolbox for environmental and climate studies
- 10.1016/j.landurbplan.2017.08.009 may be a valid DOI for title: Evaluating urban heat island in the critical local climate zones of an Indian city

INVALID DOIs

- None

@editorialbot
Collaborator Author

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@martinfleis

👋🏼 @MGousseff, @matthiasdemuzere, @wcjochem this is the review thread for the paper. All of our communications will happen here from now on.

All reviewers should create checklists with the JOSS requirements using the command @editorialbot generate my checklist. As you go over the submission, please check any items that you feel have been satisfied. There are also links to the JOSS reviewer guidelines.

The JOSS review is different from most other journals. Our goal is to work with the authors to help them meet our criteria instead of merely passing judgment on the submission. As such, the reviewers are encouraged to submit issues (and small pull requests if needed) on the software repository. When doing so, please mention https://github.com/openjournals/joss-reviews/issues/5445 so that a link is created to this thread (and I can keep an eye on what is happening). Please also feel free to comment and ask questions on this thread. In my experience, it is better to post comments/questions/suggestions as you come across them instead of waiting until you've reviewed the entire package.

We aim for reviews to be completed within about 2-4 weeks, feel free to start whenever it works for you. Please let me know if any of you require significantly more time. We can also use editorialbot to set automatic reminders if you know you'll be away for a known period of time.

Please feel free to ping me (@martinfleis) if you have any questions/concerns.

Thanks!

@martinfleis

@MGousseff please check the DOI suggestions above. If they are correct, include the DOIs in the paper. Thanks!

@martinfleis martinfleis removed the waitlisted Submissions in the JOSS backlog due to reduced service mode. label May 5, 2023
@MGousseff

I'm sorry, I think the DOIs were in the BibTeX but flagged with url = instead of doi =. I'll fix it right now and send the pull request ASAP.

@MGousseff

The DOIs have been fixed.

@martinfleis

@editorialbot check references

@editorialbot
Collaborator Author

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1016/j.eiar.2015.10.004 is OK
- 10.1016/0013-9351(72)90023-0 is OK
- 10.1080/17512549.2015.1043643 is OK
- 10.1175/bams-d-11-00019.1 is OK
- 10.1016/j.buildenv.2021.107791 is OK
- 10.21105/joss.03541 is OK
- 10.3390/land11050747 is OK
- 10.1016/j.landurbplan.2017.08.009 is OK
- 10.1111/j.1467-8306.1965.tb00529.x is OK

MISSING DOIs

- None

INVALID DOIs

- None

@martinfleis

@editorialbot generate pdf

@editorialbot
Collaborator Author

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@wcjochem

wcjochem commented May 19, 2023

Review checklist for @wcjochem

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

General checks

  • Repository: Is the source code for this software available at the https://github.com/orbisgis/lczexplore?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@MGousseff) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?
  • Data sharing: If the paper contains original data, data are accessible to the reviewers. If the paper contains no original data, please check this item.
  • Reproducibility: If the paper contains original results, results are entirely reproducible by reviewers. If the paper contains no original results, please check this item.
  • Human and animal research: If the paper contains original research on human subjects or animals, does it comply with JOSS's human participants research policy and/or animal research policy? If the paper contains no such data, please check this item.

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems).
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of need' that clearly states what problems the software is designed to solve, who the target audience is, and its relation to other work?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

@matthiasdemuzere

matthiasdemuzere commented May 22, 2023

Review checklist for @matthiasdemuzere

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

General checks

  • Repository: Is the source code for this software available at the https://github.com/orbisgis/lczexplore?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@MGousseff) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?
  • Data sharing: If the paper contains original data, data are accessible to the reviewers. If the paper contains no original data, please check this item.
  • Reproducibility: If the paper contains original results, results are entirely reproducible by reviewers. If the paper contains no original results, please check this item.
  • Human and animal research: If the paper contains original research on human subjects or animals, does it comply with JOSS's human participants research policy and/or animal research policy? If the paper contains no such data, please check this item.

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems).
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of need' that clearly states what problems the software is designed to solve, who the target audience is, and its relation to other work?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

@matthiasdemuzere

matthiasdemuzere commented May 23, 2023

  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?

@MGousseff: the repo contains a license file in Markdown, whereas this checklist asks for a plain-text file? I am new to these JOSS requirements, so I am not sure if this is fine @martinfleis?

@MGousseff

MGousseff commented May 23, 2023

Hello @matthiasdemuzere, thank you for getting involved. I followed the recommendations of Hadley Wickham in https://r-pkgs.org/, but for common licenses the license file is even ignored (it is listed in the .Rbuildignore), as CRAN considers it redundant with the license specified in the DESCRIPTION file. So if a copy in plain-text format is needed, I think I can add it and add its path to the .Rbuildignore; just let me know if it is needed.

@matthiasdemuzere

  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?

The lczexplore R package presents a tool “to compare different LCZ classifications”, or, more generally, “any type of classifications on geographical units”.

Personally, I am not convinced that the current state of the package represents a substantial scholarly effort. I assume the code has been developed in the context of Bernard et al. (2023), a paper that is currently under review? Serving a specific purpose within this paper? Data from this paper is therefore also used as a sample dataset within this package.

But I am not convinced this package will be useful for a broader audience, who might have very different use cases or types of spatial classifications? Other reasons for this concern:

Need: I’ve been working with LCZ maps for many years now. Yet I am not convinced the community is really interested in a tool that can compare two LCZ maps? Once you know about the differences / agreements between two maps, what then? How will you use this information? Without a proper reference (ground truth), I don’t really see what next steps one can take to put this to use? Bottom line: I am not convinced by the statement of need.

State of the field: I am a bit surprised by the shallow description here? The interest in LCZs is immense, yet this is not reflected here? E.g. the LCZ Generator (Demuzere et al., 2021) that has received 5000+ LCZ submissions in the past two years, the global LCZ map (Demuzere et al., 2022), or the many LCZ review papers, with Huang et al. (2023) likely being the most recent and comprehensive one?

Representation: From a LCZ content point of view, I wonder whether it really makes sense to compare maps that are developed from earth observation information and VGI/GIS layers? Typically their spatial units are very different, one being more representative for the coarser neighborhood-scale (as intended by Stewart and Oke (2012)), and one more on the block level? Please note that this comment might be more about semantics, and not a key reason for me to doubt the applicability of the package as such.

Coding standards / tests: the code seems relatively young, with most commits in the past 2 months only. As far as I can see, it has also not been tested outside the scope of the Bernard et al. (2023) paper, using e.g. different types of data? Even testing with raster LCZ maps from the most widely used sources such as the LCZ Generator or other continental or global maps seems limited?
In any case, it does not seem very straightforward to me to do so, as exemplified by the Main Functions description in the README.md:

The following functions are the core of this package:
  • importLCZgen: imports the LCZ layer from a GIS (tested with geojson and shapefile files)
  • importLCZwudapt: imports LCZ from the WUDAPT Europe TIFF. You'll have to use importLCZgen first to create the bounding box of your zone of interest
  • showLCZ: plots the map of your LCZ
  • compareLCZ: compares two LCZ classifications of the same areas, and outputs plots and data of this comparison
  • confidSensib: explores how the agreement between two LCZ classifications varies according to a confidence indicator associated with the LCZ value of each geom (sensitivity analysis)

Somehow the functionality seems to be there, but rather inaccessible, and definitely not in the shape I would expect from a package? To add, I e.g. also do not see any tests to verify the functionality of this software?

In summary, I support the fact that the authors want to make this code publicly available. Yet in its current state, it feels more like a personal R library serving a specific goal than a tool that will be sufficiently useful for a broader community and more general applications? As such, I don’t think it is ready to be published in JOSS @martinfleis

@MGousseff

MGousseff commented May 31, 2023

Thank you @matthiasdemuzere for your review.

I will answer the points you have raised and propose some modifications.

About the Need:
This is my main concern about your review. The fact that defining "ground truth" in LCZ studies is difficult is precisely why one needs a tool to compare LCZ classifications. As you noticed, the tool was developed for the needs of the Bernard et al. paper analysis, to compare vector datasets, but it has been designed from the start as a generic method to compare any LCZ classification maps (thus raster is included). Maybe this was not clear in the manuscript; this part will be detailed in the future version of the paper. For example, a function is available to load and process the WUDAPT Europe TIFF map.

Concerning the need for the tool, we are a bit surprised by the comments “Yet I am not convinced the community is really interested in a tool that can compare two LCZ maps” and “I am not convinced by the statement of need.” Indeed, even if WUDAPT is the main standard method and endorsed by the community, there are other LCZ sources (ground truth, GIS approaches, machine learning...). So there is a crucial need to objectively quantify the differences between two LCZ maps. In our opinion, lczexplore finds its place there.
For instance, in the Bernard et al. (2023) manuscript, lczexplore was useful to compare OSM and BDTopo input data. We saw that the main discrepancies reflect the way building heights are taken into account, but also the differences in input vegetation data between OSM and BDTopo. This is relevant information that was highlighted using the lczexplore package, and it can be used in the same way to compare any LCZ maps (for example GeoClimate to WUDAPT) for a given territory.

Another useful feature of the package is how easy it makes grouping LCZ types into broader categories: if one wants to compare the urban envelopes of an area, it is very straightforward to group, for instance, all the urban LCZ types, all the vegetation LCZ types and so on, and to compare the resulting maps, all in one go.
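The grouping described above boils down to a lookup from detailed LCZ codes to user-defined broader groups. A minimal sketch in Python (an illustration of the idea only, not the package's actual R API; the group names chosen here are hypothetical):

```python
# Map detailed LCZ codes onto broader, user-defined groups before comparing maps.
# Codes follow Stewart & Oke (2012): "1"-"10" are built types, "A"-"G" natural types.
LCZ_GROUPS = {
    **{str(code): "urban" for code in range(1, 11)},  # built LCZ 1-10
    **{letter: "vegetation" for letter in "ABCD"},    # tree/shrub/plant types
    "E": "bare", "F": "bare",                         # rock / bare soil
    "G": "water",
}

def regroup(lcz_codes):
    """Replace each detailed LCZ code with its broader group label."""
    return [LCZ_GROUPS[code] for code in lcz_codes]

print(regroup(["1", "6", "A", "G"]))  # ['urban', 'urban', 'vegetation', 'water']
```

A plain mapping like this also makes it easy to check, before comparing two maps, that every detailed code is assigned to exactly one group.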

About the State of the field:
In our understanding, JOSS papers have to focus on similar tools, i.e. tools that compare LCZ classifications, not tools that produce LCZ classifications. This is why we did not describe any LCZ classification tool in detail. We do not know of any free and open-source tool for comparing two LCZ maps, while it seems useful to offer such a service. If you know of such a tool, please let us know.
The package is specifically designed for LCZ in the sense that when you specify repr='standard', the colors on the maps are the standard LCZ map-style colors. However, with repr='grouped', any pair of maps of a qualitative variable can be compared, as you can specify the names of the levels and the colors you want associated with them. A particular effort has been made to make this choice of grouping and colors robust for the user, with quite explicit error messages, so it may help geomaticians compare several outputs of their processing without tedious repetitive work.

Representation:
In our opinion, the neighborhood scale averages information that might be quite heterogeneous at the block scale. The way to compare WUDAPT to GeoClimate is not set yet, since GeoClimate also allows aggregating the LCZ available at block scale to the WUDAPT grid cell, keeping at this scale information about the degree of heterogeneity found at block scale. In any case (whether the block information is aggregated at cell scale or not), lczexplore can still be used to investigate the differences between maps resulting from the GeoClimate or the WUDAPT approach in the near future.

About coding standards:
You are right about the age of the tool: it was developed for the needs of the Bernard et al. (2023) manuscript and has not been tested extensively since then. But it has been tested and shows interesting results when comparing WUDAPT to GeoClimate maps. The package produces no warnings and no errors when one runs R CMD check (only notes, which CRAN standards allow). Each function has unit tests (one can explore them in the /inst/tinytest folder of the package), all documented, and a vignette shows the most probable use case.

Concerning the documentation and the readme informations:
We agree that some concrete examples must be added to facilitate the use of lczexplore. We will extend and describe it a bit more, and specify how to sequence the main functions to produce the comparison of two maps. Concretely, we propose to describe how to obtain a comparison of two vector-layer maps (such as in the Bernard et al. (2023) manuscript), plus a new example describing how to compare the European WUDAPT TIFF LCZ map to a GeoClimate OSM output.

Whatever the choice of JOSS regarding this paper, we are looking forward to deepening the comparison of several approaches to LCZ generation, including, of course, the WUDAPT and GeoClimate approaches.

@matthiasdemuzere

@MGousseff Thanks for providing additional information regarding my concerns.

Unfortunately I am still not convinced by the statement of need.

There are nowadays hundreds of papers using LCZs for a certain purpose. Yet, typically, people are not interested in comparing one LCZ map to another (often not even in a proper accuracy assessment of one map). They just use the LCZ map they (or someone else) created (using whatever algorithm / input data) in any application of interest.

There are some that do an intercomparison, like the Muhammad et al. (2022) paper you mention in your manuscript, in which they conclude that the GIS-LCZ based method shows a strong improvement over the previous WUDAPT L0 result, based on accuracy metrics (confusion matrix). We've checked this map in detail with local experts, and believe it is missing important features, e.g. large LCZ 8 zones. So even though accuracy metrics can be higher, that does not necessarily mean the map is better.

Continuing with this example, I could use your lczexplore tool to visualize differences between the Muhammad et al. (2022) LCZ map and e.g. the LCZ map from Fenner et al. (2017). But then what? You mention the identification of how different input datasets treat vegetation, building heights, ... differently. Ok. Good. But again, what then?

So basically I am still looking for a clearer statement of need that outlines the potential of this tool to a broad community. How can it contribute to an improved understanding of the uncertainties, strengths and weaknesses of the LCZ framework / method(s)? Identifying differences between maps is one thing, but the crucial step is what comes after that: what to do with this information?

I hope I am not sounding too cruel here. I just genuinely would like to understand how this will benefit the community, in a broader sense and in the context of continuously advancing the field of LCZ-related work.

@matthiasdemuzere

plus also a new one describing how to obtain the comparison of the European WUDAPT TIFF LCZ map to a GeoClimate OSM output.

This would be much appreciated. I started using the tool, but I stopped as it was not 100% clear to me how to do so with a raster file. There was some very high-level info there, but in my opinion it required too much time investment. Given that this is positioned as a tool, it should in my opinion be much more automated and self-explanatory?

On the side: how does the tool deal with different labels? LCZ labels can vary widely, for example for the natural classes: 11-17, or A-G, or 101-107, ...
More generally: a large proportion of the LCZ Generator and W2W code is dedicated to checking inputs, to make sure they are in line with expected formats. Does lczexplore have something similar in place?
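To make the labelling question above concrete: since the natural classes appear as 11-17, A-G, or 101-107 depending on the source, a comparison tool needs some normalisation step, roughly like the following. This is a Python sketch of the idea only; whether and how lczexplore checks its inputs would have to be confirmed in its R code:

```python
# Normalise the three common encodings of the natural LCZ classes (A-G)
# to the letter convention before comparing two maps.
_NATURAL = "ABCDEFG"

def normalize_lcz(label):
    """Return a canonical label: built classes as '1'-'10', natural as 'A'-'G'."""
    s = str(label).strip().upper()
    if s in _NATURAL:                 # already letter-coded
        return s
    n = int(s)
    if 1 <= n <= 10:                  # built classes stay numeric
        return s
    if 11 <= n <= 17:                 # 11-17 -> A-G
        return _NATURAL[n - 11]
    if 101 <= n <= 107:               # 101-107 -> A-G
        return _NATURAL[n - 101]
    raise ValueError(f"Unknown LCZ label: {label!r}")

print([normalize_lcz(x) for x in [14, "104", "D", 3]])  # ['D', 'D', 'D', '3']
```

Rejecting anything outside the known ranges, as the last line does, is the kind of input checking the LCZ Generator and W2W reportedly invest in.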

@oliviaguest
Member

oliviaguest commented Nov 10, 2023

@MGousseff can you do one more pass? I see line 21 should not have brackets, for example. 😊

@oliviaguest
Member

@martinfleis it seems the article is 4x the size of what we consider a typical JOSS paper: https://joss.readthedocs.io/en/latest/submitting.html. Is there a reason for this I am missing?

@MGousseff

@MGousseff can you do one more pass? I see line 21 should not have brackets, for example. 😊

I'm willing to comply, but I think I failed to understand the logic! You mean there should be no brackets at all when citing? I thought that we kept brackets when citing the article and dropped them when citing the authors, but maybe I misunderstood? If so, there are still lots of extra brackets!

About the length of the article, I'll let @martinfleis answer, but the paper got a little longer when one of the reviewers asked us to justify the need for comparing maps.

@martinfleis

@oliviaguest The paper originally had a borderline length (I tend to minimise asking authors to cut stuff) and got longer during the review. I am personally fine with it as is. In my interpretation, the guideline is there more to limit the load on reviewers than for any other reason, but I am happy to reconsider that position if you think it should be shorter.

@oliviaguest
Member

@MGousseff can you do one more pass? I see line 21 should not have brackets, for example. 😊

I'm willing to comply, but I think I failed to understand the logic! You mean there should be no brackets at all when citing? I thought that we kept brackets when citing the article and dropped them when citing the authors, but maybe I misunderstood? If so, there are still lots of extra brackets!

Do you mean you're confused about the difference between parenthetical and in-text citations?

@oliviaguest
Member

Here is a bit of an explanation: https://joss.readthedocs.io/en/latest/submitting.html#citations

@oliviaguest
Member

@oliviaguest The paper originally had a borderline length (I tend to minimise asking authors to cut stuff) and got longer during the review. I am personally fine with it as is. In my interpretation, the guideline is there more to limit the load on reviewers than for any other reason, but I am happy to reconsider that position if you think it should be shorter.

I think it should have been shortened, but now it's past that point, and it's fine to publish as it is. Thanks for clarifying. ☺️

@martinfleis

@editorialbot generate pdf

@editorialbot
Collaborator Author

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@MGousseff

MGousseff commented Nov 13, 2023

I accepted the pull request. I'm very grateful, because the fixes I was about to make used the wrong syntax (see the comment on the PR). Thanks a lot. I guess we have to re-ask 👍
@editorialbot generate pdf

@martinfleis

@editorialbot generate pdf

@editorialbot
Collaborator Author

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@openjournals openjournals deleted a comment from editorialbot Nov 15, 2023
@oliviaguest
Member

@editorialbot recommend-accept

@editorialbot
Collaborator Author

Attempting dry run of processing paper acceptance...

@editorialbot
Collaborator Author

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1016/0304-3800(92)90003-W is OK
- 10.1002/joc.5447 is OK
- 10.1016/j.eiar.2015.10.004 is OK
- 10.1016/0013-9351(72)90023-0 is OK
- 10.1080/17512549.2015.1043643 is OK
- 10.1175/bams-d-11-00019.1 is OK
- 10.1016/j.buildenv.2021.107791 is OK
- 10.21105/joss.03541 is OK
- 10.3390/land11050747 is OK
- 10.1016/j.landurbplan.2017.08.009 is OK
- 10.1111/j.1467-8306.1965.tb00529.x is OK
- 10.1175/BAMS-D-16-0236.1 is OK
- 10.1177/001316446002000104 is OK
- 10.3389/fenvs.2021.637455 is OK
- 10.5194/egusphere-2023-371 is OK
- 10.5194/gmd-2021-428 is OK
- 10.1016/j.uclim.2018.01.008 is OK
- 10.32614/RJ-2018-009 is OK
- 10.5194/ems2022-83 is OK
- 10.1038/s41597-020-00605-z is OK

MISSING DOIs

- None

INVALID DOIs

- None

@oliviaguest
Member

@editorialbot accept

@editorialbot
Collaborator Author

Doing it live! Attempting automated processing of paper acceptance...

@editorialbot

👋 @openjournals/sbcs-eics, this paper is ready to be accepted and published.

Check final proof 👉📄 Download article

If the paper PDF and the deposit XML files look good in openjournals/joss-papers#4779, then you can now move forward with accepting the submission by compiling again with the command @editorialbot accept

@editorialbot

Ensure proper citation by uploading a plain text CITATION.cff file to the default branch of your repository.

If using GitHub, a Cite this repository menu will appear in the About section, containing both APA and BibTeX formats. When exported to Zotero using a browser plugin, Zotero will automatically create an entry using the information contained in the .cff file.

You can copy the contents for your CITATION.cff file here:

CITATION.cff

cff-version: "1.2.0"
authors:
- family-names: Gousseff
  given-names: Matthieu
  orcid: "https://orcid.org/0000-0002-7106-2677"
- family-names: Bocher
  given-names: Erwan
  orcid: "https://orcid.org/0000-0002-4936-7079"
- family-names: Bernard
  given-names: Jérémy
  orcid: "https://orcid.org/0000-0001-7374-5722"
- family-names: Wiederhold
  given-names: Elisabeth Le Saux
  orcid: "https://orcid.org/0000-0002-2079-8633"
contact:
- family-names: Bernard
  given-names: Jérémy
  orcid: "https://orcid.org/0000-0001-7374-5722"
- family-names: Wiederhold
  given-names: Elisabeth Le Saux
  orcid: "https://orcid.org/0000-0002-2079-8633"
doi: 10.5281/zenodo.10041206
message: If you use this software, please cite our article in the
  Journal of Open Source Software.
preferred-citation:
  authors:
  - family-names: Gousseff
    given-names: Matthieu
    orcid: "https://orcid.org/0000-0002-7106-2677"
  - family-names: Bocher
    given-names: Erwan
    orcid: "https://orcid.org/0000-0002-4936-7079"
  - family-names: Bernard
    given-names: Jérémy
    orcid: "https://orcid.org/0000-0001-7374-5722"
  - family-names: Wiederhold
    given-names: Elisabeth Le Saux
    orcid: "https://orcid.org/0000-0002-2079-8633"
  date-published: 2023-11-15
  doi: 10.21105/joss.05445
  issn: 2475-9066
  issue: 91
  journal: Journal of Open Source Software
  publisher:
    name: Open Journals
  start: 5445
  title: "lczexplore: an R package to explore Local Climate Zone
    classifications"
  type: article
  url: "https://joss.theoj.org/papers/10.21105/joss.05445"
  volume: 8
title: "lczexplore: an R package to explore Local Climate Zone
  classifications"

If the repository is not hosted on GitHub, a .cff file can still be uploaded to set your preferred citation. Users will be able to manually copy and paste the citation.

Find more information on .cff files here and here.
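Before committing a CITATION.cff, a quick sanity check can save a round trip with GitHub's "Cite this repository" feature. Below is a minimal shell sketch: the file contents are a trimmed, hypothetical excerpt (not the full citation above), and the loop only checks that the required top-level keys are present. For full schema validation, a dedicated tool such as cffconvert (installable via pip, run as `cffconvert --validate`) is the usual choice.

```shell
# Hypothetical trimmed CITATION.cff, for illustration only
cat > CITATION.cff <<'EOF'
cff-version: "1.2.0"
message: If you use this software, please cite our article.
title: "lczexplore: an R package to explore Local Climate Zone classifications"
authors:
- family-names: Gousseff
  given-names: Matthieu
EOF

# Check that each required top-level key is declared
for key in cff-version message title authors; do
  grep -q "^${key}:" CITATION.cff \
    && echo "ok: ${key}" \
    || echo "missing: ${key}"
done
```

This only verifies key presence, not YAML validity or the CFF schema, which is why a real validator is still recommended before relying on the rendered citation.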

@editorialbot

🐘🐘🐘 👉 Toot for this paper 👈 🐘🐘🐘

@editorialbot

🚨🚨🚨 THIS IS NOT A DRILL, YOU HAVE JUST ACCEPTED A PAPER INTO JOSS! 🚨🚨🚨

Here's what you must now do:

  1. Check final PDF and Crossref metadata that was deposited 👉 Creating pull request for 10.21105.joss.05445 joss-papers#4780
  2. Wait five minutes, then verify that the paper DOI resolves https://doi.org/10.21105/joss.05445
  3. If everything looks good, then close this review issue.
  4. Party like you just published a paper! 🎉🌈🦄💃👻🤘

Any issues? Notify your editorial technical team...

@editorialbot added the accepted and published (Papers published in JOSS) labels Nov 15, 2023
@MGousseff

I AM going to party like I just published a paper. 🤘

🙏🙏
I want to very warmly thank @wcjochem and @matthiasdemuzere for their thorough reviewing. It is thanks to you that the package is way better than what I submitted in the first place. @matthiasdemuzere I feel very grateful for the time you spent, especially for trying to reinstall the package and run the examples when you were not convinced it could be useful.

🙏🙏
@martinfleis I really appreciated how patient you were and how you guided me through the process. I hope I will be a bit "JOSS-smarter" next time. The last PR to help me correct the BibTeX and Markdown syntax was a very classy move.

🙏🙏
@oliviaguest Thank you for your understanding about the article length.

🙏🙏
@ebocher Thank you for your trust, I'm so glad I have this chance to be part of this team (not just a polite b
@jeremy-b thanks for the support.

The package will be updated, as a Shiny interface to launch GeoClimate is already in development.

@oliviaguest

Huge thanks to the editor: @martinfleis; and the reviewers: @matthiasdemuzere, @wcjochem! ✨ JOSS appreciates your work and effort. ✨ Also, big congratulations to the author: @MGousseff! 🥳 🍾

@editorialbot

🎉🎉🎉 Congratulations on your paper acceptance! 🎉🎉🎉

If you would like to include a link to your paper from your README use the following code snippets:

Markdown:
[![DOI](https://joss.theoj.org/papers/10.21105/joss.05445/status.svg)](https://doi.org/10.21105/joss.05445)

HTML:
<a style="border-width:0" href="https://doi.org/10.21105/joss.05445">
  <img src="https://joss.theoj.org/papers/10.21105/joss.05445/status.svg" alt="DOI badge" >
</a>

reStructuredText:
.. image:: https://joss.theoj.org/papers/10.21105/joss.05445/status.svg
   :target: https://doi.org/10.21105/joss.05445

This is how it will look in your documentation:

(DOI badge image)

We need your help!

The Journal of Open Source Software is a community-run journal and relies upon volunteer effort. If you'd like to support us please consider doing either one (or both) of the following:
