[gecode-users] About the different performance of two "apparently similar" Gecode models

Ignacio Castiñeiras Pérez ncasti at fdi.ucm.es
Thu Jan 10 16:05:00 CET 2013


Hi Christian,
Awesome! I changed the variable domains so that they no longer range from
MIN_INT to MAX_INT, and the difference in time and number of propagators
just disappeared.

Really, thank you very much for your reply!

Best regards,
Nacho


2013/1/9 Christian Schulte <cschulte at kth.se>

> Hi,
>
> Just two things completely off the top of my head:
>
> - How does your search control recomputation? Gecode is quite clever,
>   in particular during BAB.
> - I saw that your variable domains are always from MIN_INT to MAX_INT.
>   Gecode's aren't. This makes quite some difference for Golomb. Check
>   the MPG case study on Golomb.
>
> Best
>
> Christian
>
> --
>
> Christian Schulte, www.ict.kth.se/~cschulte/
>
>
> *From:* users-bounces at gecode.org [mailto:users-bounces at gecode.org]
> *On Behalf Of* Ignacio Castiñeiras
> *Sent:* Wednesday, January 09, 2013 6:13 PM
> *To:* users at gecode.org
> *Subject:* [gecode-users] About the different performance of two
> "apparently similar" Gecode models
>
>
> Hi all,
>
> We are observing different performance in two "apparently similar" Gecode
> models, and after performing some tests we are stuck trying to find out
> where this performance difference comes from.
>
>
> *CONTEXT:*
>
> We are performing a modelling and solving comparison of the Golomb Ruler
> problem between Gecode and our Constraint Functional Logic Programming
> system TOY(FDg) (which uses Gecode for finite domain constraint solving).
> Our hypothesis is that, since Golomb is a combinatorial optimization
> problem, as soon as it scales the time spent on search will account for
> almost 100% of the total elapsed time (making the inherent overhead of
> interfacing TOY(FDg) to Gecode nearly negligible).
>
>
> Unfortunately (at least for us :-)) this does not hold. For example,
> whereas for Golomb-11 the Gecode model spends 28 sec, the TOY(FDg) model
> spends 42 sec (50% more time). This difference comes directly from the
> time they spend on search, which is strange, as both models use the
> *same formulation* (in the sense of the same number of variables with the
> same domains, the same set of constraints, and the same search strategy).
> Of course, the Gecode code resulting from our TOY(FDg) interface turns
> out to be less natural than the one obtained when modelling directly in
> Gecode (among its drawbacks, it has to use a single variable vector and
> dedicated predicates for adding each new variable and constraint arising
> in the TOY(FDg) goal).
>
>
> To find the reason for this difference we have performed some tests,
> which I describe below (all the files I mention are attached to this
> email). We are using Gecode 3.7.3 with Visual Studio 2008.
>
>
> *TEST 1.*
>
> The test uses:
>
> - golomb_5_gist_natural.cpp (model using Gecode directly).
> - golomb_5_gist_natural.res (log of the results from running the model).
> - golomb_5_gist_natural.pdf (Gist tree obtained by running the model).
>
> - golomb_5_gist_interfaced.cpp (simulation of the Gecode model resulting
>   from TOY(FDg)).
> - golomb_5_gist_interfaced.res (log of the results from running the model).
> - golomb_5_gist_interfaced.pdf (Gist tree obtained by running the model).
>
>
> Both models solve Golomb-5, which contains:
>
> - Variables: *m[0,m0,m1,m2,m3]*; *d[d0,d1,d2,d3,d4,d5,d6,d7,d8,d9]*.
> - Constraints: *m[i]* < *m[i+1]*; *d* represents the pairwise differences
>   of *m*; all_different(*d*); *d[0]* < *d[last]*.
> - Search: label *m* in textual order, minimizing *m[last]*.
>
>
> Conclusions:
>
>    1. The *pdf* files show that the Gist trees of the Gecode and TOY(FDg)
>    models are exactly the same.
>    2. The *res* files show that the propagation is the same in almost all
>    nodes, *but not in the node marked in grey in the pdf files*. For this
>    node Gecode makes m = {0, 1, 6, *8,* 10} and d = {1, 6, *8,* 10,
>    *[5..6], [6..7],* 9, 2, 4, 3} (lines 73-76 of golomb_5_gist_natural.res),
>    but TOY(FDg) makes varsSearch = {1, 6, *7,* 10, 1, 6, *7,* 10, *5, 6,*
>    9, 2, 4, 3} (lines 60-62 of golomb_5_gist_interfaced.res). As seen in
>    the Gist trees, both systems fail in this node, but TOY(FDg) has to do
>    more work to discover it.
>
>
> *TEST 2.*
>
> The test uses:
>
> - golomb_5_bab_natural.cpp (model using Gecode directly).
> - golomb_5_bab_natural.res (log of the results from running the model).
>
> - golomb_5_bab_interfaced.cpp (simulation of the Gecode model resulting
>   from TOY(FDg)).
> - golomb_5_bab_interfaced.res (log of the results from running the model).
>
>
> Both models use a search engine (to obtain statistics of the search) and
> the user-defined constraint "checkwhenpruned", which logs each
> modification of the domain of the variables. With that we can trace how
> the domains of the variables evolve during the search.
>
>
> Conclusions:
>
>    1. The res files show that both models explore 19 nodes, with 8 fails.
>    However, whereas the number of propagators in Gecode is 542, in
>    TOY(FDg) it is 544 (possibly the two extra propagators of the node
>    marked in grey in the pdf files).
>    2. The res files show that there are also two more kinds of differences
>    in the propagation of Gecode and TOY(FDg). Let's focus on lines 66-83
>    of golomb_5_bab_natural.res and lines 71-89 of
>    golomb_5_bab_interfaced.res. They represent the propagation performed
>    in the node marked in yellow in the pdf files (initial state m = {0, 1,
>    [3..11], [6..13], [10..15]} and d = {1, [3..11], [6..13], [10..15],
>    [2..10], [5..12], [9..14], [2..10], [3..12], [2..9]} -> final state
>    m = {0, 1, 3, [7..11], [11..15]} and d = {1, 3, [7..11], [11..15], 2,
>    [6..10], [10..14], [4..8], [8..12], [4..8]}, thus affecting *all the
>    variables*):
>
> 2.1. The order in which the variables see their domains pruned is
> different in Gecode and TOY(FDg), which could come from a different order
> in which the constraints are propagated.
>
> 2.2. The number of prunings is different. For example, *in Gecode there
> are 18 variable notifications, but in TOY(FDg) there are 19 (the pruning
> d[5] = [6..12] appears in TOY(FDg) but not in Gecode)*. As seen in the
> Gist trees, both systems succeed in propagating this node, but TOY(FDg)
> has to do more work to discover it.
>
>
> *TEST 3.*
>
> The test uses:
>
> - golomb_11_bab_natural.cpp (model using Gecode directly).
> - golomb_11_bab_natural.res (log of the results from running the model).
>
> - golomb_11_bab_interfaced.cpp (simulation of the Gecode model resulting
>   from TOY(FDg)).
> - golomb_11_bab_interfaced.res (log of the results from running the model).
>
> Both models use the computer clock to measure the elapsed time of running
> the problem.
>
>
> Conclusions:
>
>    1. Again, both models explore the same search tree (2968201 nodes and
>    1484086 fails). But, again, the number of propagators is different
>    (320150379 for Gecode and 295041321 for TOY(FDg)). Interestingly, for
>    Golomb-11 it is Gecode that has (much) more propagators, whereas for
>    Golomb-5 it was TOY(FDg) that had (just) two more.
>    2. Regarding the elapsed time for the search, Gecode (28 sec) clearly
>    outperforms TOY(FDg) (42 sec).
>
>
> *WE NEED HELP:*
>
>    1. Do you think that the big performance difference of Test 3 is due
>    to the reasons we have found in Test 1 and Test 2 (where TOY(FDg)
>    always has to work more than Gecode to obtain the same propagation
>    results)?
>    2. If you think this is not the case, do you have any insight into
>    where the performance difference comes from?
>
>
> Thank you very much in advance for spending your time reading this (so
> long) email and helping us.
>
> Best regards,
>
> Nacho Castiñeiras
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://www.gecode.org/pipermail/users/attachments/20130110/9b5dc68b/attachment-0001.html>

