OK, I have split my reply into a series of separate messages in the hope that this helps the discussion. I am looking forward to replies from you or others!
As far as I can tell at this moment, this will be the last reply in this series. Thank you very much for your patience!
bg> In general, none of our functions check validity beforehand. That is mainly because of performance reasons: a validity check is quite time-consuming, so it is left to the user to supply valid input, or check validity beforehand. So area, equals, difference, and all the functions you name, they are all not checking if the polygon is valid. At least not beforehand (sometimes an operation fails because of invalidity - then an exception can be thrown).
This is understood and makes perfect sense to me. I have no problem with the requirement to preprocess the input before invoking the actual algorithms, and I can see the benefits from that (preprocess once and then run a long and very efficient chain of algorithms, transforming canonical representations into canonical representations). Also, depending on your application domain, you may be able to eliminate part or all of the preprocessing to further improve efficiency. I have no issues with that at all.
bg> I understand your problem of course, because our valid-check is not yet perfect and also (seeing your tickets) some of the overlay operations seem still to result in invalid output (that should be fixed of course).
We have consensus that valid input should yield valid output (whatever "valid" means, see discussion about empty multi-polygon). The fact that there are currently some bugs that break this invariant isn't so bad as long as we agree on how it should be and that these cases must indeed be regarded as bugs. Fine with me, and thank you for all the efforts to contain those bugs!
The bigger problem, in the grand scheme of things, is that I need the tools to do the preprocessing, because this is not trivial at all. remove_spikes, correct and unique go a long way, but it is not always exactly clear to me what their semantics are (see separate discussion about remove_spikes leaving a single point), and sometimes I have no way other than writing my own algorithm to analyze the geometry and, e.g., detect that it actually represents the empty set. This is something that I hope can be delegated to boost::geometry in some future version. You partly answered this one already:
bg> That is a good question. Our dissolve algorithm (still in extensions) should make geometries valid. If they should remove spikes then - probably, that is not yet considered when it was designed. dissolve removes self-intersections from a potentially multi-self-intersecting geometry by trying to collect all clockwise (or in your case ccw) areas.
I don't think that you need one huge magical algorithm that just "does what I mean". It is fine, and potentially useful, to split up the job of "making a polygon valid" into a sequence of steps, allowing those steps to be composed as required. In any case, some documentation would be helpful that explains, e.g., in which order these building blocks should be applied: correct() before unique()? correct() after remove_spikes()? etc.
In particular: Which invariants are enforced by correct? Which invariants are required by remove_spikes? Does every step guarantee all required input invariants for its output as well?
Volker Schöch | [hidden email]
Senior Software Engineer
We are looking for C++ Developers: http://www.think-cell.com/career
think-cell Software GmbH | Chausseestr. 8/E | 10115 Berlin | Germany
http://www.think-cell.com | phone +49 30 666473-10 | US phone +1 800 891 8091
Amtsgericht Berlin-Charlottenburg, HRB 85229 | European Union VAT Id DE813474306
Directors: Dr. Markus Hannebauer, Dr. Arno Schödl
Geometry mailing list