Consider integrating Tabula #340
Hi Daniel! Yes, we (Vincent from CDS Strasbourg @Vi-dot and me) studied the integration of Tabula. We started with the idea of integrating the ContentMine module for tables, but we failed to find a reliable way to convert PDF areas (the area identified as a table by GROBID) into SVG. So we then explored tabula-java. Unfortunately, looking at the code, we saw that it is completely interlaced with PDFBox for all its internal data structures. Concretely, this means either:
That's our current state of thought about internal table structure parsing...
Hi Patrice, It is great that you two have already had a look at it. Regarding ContentMine, did you have success converting tables with it in general? It did seem a good candidate initially but may not be as actively developed (last time I tried it on PKP's coaction dataset, the heuristics didn't detect words correctly and there wasn't a table output - but I may have used it incorrectly). In my prototype processing, I was converting lxml (the output of your pdftoxml) to SVG, but I haven't tried to feed it into ContentMine. Just curious which difficulties you encountered - detecting the correct table area? For the table integration, I guess it depends on how much time you have to keep maintaining it going forward. It might be okay for it to be a bit slower in that case (it could be worth having a configuration option to turn it on or off, as not everyone will be interested in tables). It might also be good to measure whether GROBID or Tabula is better at detecting tables or the table areas. Some potential options to consider:
I guess that would make it easier to switch if it turns out another library becomes better at extracting tables.
Hi @kermitt2, I've worked with the tabula branch. For better results, their Spreadsheet algorithm is more suitable for table extraction than the basic one: https://github.com/kermitt2/grobid/blob/tabula/grobid-core/src/main/java/org/grobid/core/data/Table.java#L154 I see that most of tabula's code powers table detection, not table extraction, which is already done by GROBID. Have you considered extracting tables from pdfalto directly, with some additional checks?
Hi @Vitaliy-1! Nice integration and results indeed! Do we have some evaluation data for validating one algorithm against another (as compared to the basic one integrated by @Vi-dot, for instance)? What would be the gain of adding the table extraction in pdfalto directly? There is no explicit ALTO structure for handling tables, and all the useful information should anyway be available in GROBID. I think the table "parsing" part covered by GROBID would benefit a lot from more training data - there is really very little for the table model. The fulltext model is weak too in terms of examples of tables, but before adding more training data to this model, I have planned to experiment with the new reading order of pdfalto and update the existing training data to this "new order". Thanks a lot for your contribution, I think it would already be a great addition to have this kind of results integrated in GROBID.
Not a problem to process articles containing tables for the evaluation of these 2 tabula algorithms. I also want to experiment with detecting rows and cells from pdfalto. The gain here would be processing speed and the opportunity not to rely on 3rd-party software. I think adding information to the LayoutToken that is the last in

The branch with tabula integration: https://github.com/Vitaliy-1/grobid/tree/core_tabula
Hi @kermitt2, I've looked through the pdfalto output contained in LayoutToken and implemented an algorithm based on line breaks and token positioning. But after testing, I've noticed that it may not be enough for some cases, e.g., when rows are not consecutively parsed by pdfalto/Xpdf line by line. The tabula-lattice algorithm usually fails to recognize the table in such cases too. It's probably related to how the PDF was created. So, if tabula turns out not to be enough because of low accuracy, I think first applying a kind of sorting that takes into account the tokens' horizontal and vertical positioning and the optimal distance between rows (± some margin values) should at least partially solve the problem for these cases.
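The sorting idea described here could look something like the following sketch (illustrative Python only, not GROBID code; the `Token` fields and the `ROW_TOLERANCE` value are assumptions for the example):

```python
from dataclasses import dataclass

ROW_TOLERANCE = 2.0  # vertical slack (in PDF units) for tokens considered on the same row

@dataclass
class Token:
    text: str
    x: float  # left edge of the token
    y: float  # vertical position (e.g., baseline)

def sort_into_rows(tokens):
    """Group tokens whose vertical positions differ by at most ROW_TOLERANCE
    into rows, then order each row left to right - restoring a row-by-row
    reading order regardless of the order tokens appeared in the PDF stream."""
    rows = []
    for tok in sorted(tokens, key=lambda t: t.y):
        if rows and abs(rows[-1][0].y - tok.y) <= ROW_TOLERANCE:
            rows[-1].append(tok)  # close enough vertically: same row
        else:
            rows.append([tok])    # start a new row
    return [sorted(row, key=lambda t: t.x) for row in rows]
```

For example, tokens emitted column by column in the stream would still come out grouped into left-to-right rows.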
Hello! Are you referring to an end-of-line after each cell's content? So, instead of having the cell contents of a row one after another on the same line, we have one cell content per line? I've seen this case quite frequently and it's due to the PDF stream. However, we could certainly correct that in pdfalto based on the fact that all these tokens have the same baseline. It's one of the improvements related to better handling of reading order. It then makes things easier to group tokens in GROBID - this would be less dependent on the particular output of pdfalto, and in case the ALTO file corresponding to the fixed layout comes from another tool (OCR, or maybe in the future a DOCX-to-ALTO converter), we can certainly apply the same table structuring in GROBID for all of them.
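A minimal sketch of the baseline-merging idea mentioned here (illustrative Python; the `(baseline, text)` line representation and the `eps` tolerance are assumptions, not pdfalto's actual data model):

```python
def merge_same_baseline(lines, eps=0.5):
    """lines: list of (baseline_y, text) pairs in stream order.
    Join adjacent lines whose baselines differ by less than eps into a
    single logical line, collapsing spurious end-of-line events."""
    merged = []
    for y, text in lines:
        if merged and abs(merged[-1][0] - y) < eps:
            # same baseline as the previous "line": it is really the same row
            merged[-1] = (merged[-1][0], merged[-1][1] + " " + text)
        else:
            merged.append((y, text))
    return merged
```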
Yes, what I've noticed is that an end-of-line occurs after each cell, or after each line of the cell content (if the cell content has several lines). Then another cell in the row is parsed. This is good because it helps to determine cells, and it's actually what I've used: https://github.com/Vitaliy-1/grobid/blob/core_cells/grobid-core/src/main/java/org/grobid/core/data/Table.java#L267 I have very rarely encountered situations where two cells are not delimited by an end-of-line symbol; the one case was actually a PDF created from DOCX. But more often, rather than moving to the next cell in the row, the content of the next cell in the column is parsed. E.g., it's the case for the PDF from this article: https://www.banglajol.info/index.php/JPharma/article/view/228
So, here the parsing is done on a per-column basis, rather than row by row. That's where my current algorithm and (often) tabula-lattice fail. Regarding the former, I thought that prior sorting could solve the problem. Also, there can be situations when differences in the vertical positioning of cells' content (I mean the first line of a cell) can affect the parsing order. But that's more or less manageable. Hope it makes sense :)
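One possible heuristic for detecting this column-wise stream order, sketched in Python (the thresholds and the plain `(x, y)` position representation are assumptions for illustration, not part of any of the tools discussed):

```python
def stream_is_column_major(positions):
    """positions: list of (x, y) token positions in PDF stream order.
    Return True when consecutive tokens advance down the page within the
    same column more often than they advance across a line - a hint that
    the table was emitted column by column."""
    down = across = 0
    for (x0, y0), (x1, y1) in zip(positions, positions[1:]):
        dx, dy = abs(x1 - x0), y1 - y0
        if dy > 1.0 and dx < 5.0:        # next token is below, roughly same x
            down += 1
        elif abs(dy) <= 1.0 and dx > 0:  # next token is to the right, same line
            across += 1
    return down > across
```

Such a check could decide whether the position-based re-sorting needs to be applied before cell detection.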
Yes, it's very clear and in line with what I observed. I will dive back into pdfalto this weekend to see how to refine the end-of-line event in these cases, and to exploit the spatial layout more rather than the PDF stream.
Tabula seems to be actively developed and doesn't perform so badly (I haven't done a quantitative analysis yet).
Would you consider integrating it?