Talk:Riven tBMP resources
Aaron, I'm not entirely clear on your recent additions to the tBMP decode algorithms. Are you aware of specific images that exhibit the behaviors you describe, to use as test cases, or of a decoder implementation which respects those details?
[Tito] I think I've seen Aaron's effects in some compressed tBMPs. In particular, I recall being surprised by observing duplets repeating themselves, though I don't remember which tBMPs I was inspecting at the time.
[ agrif 06:55, 23 February 2009 (CET) ] I mean to go back and check specifically for this, but I'm almost completely certain that this is true. Initially, my algorithm used an unsigned integer to represent the number of pixels left to look back from where I started, and decremented it each time I copied a pixel. This crippled all my images and caused weird segfaults when it wrapped around past 0. It was fixed when I switched to a signed integer type (i.e. let the command copy into newly-written pixels). This is basically an implementation detail; most people would do it a different way where this doesn't matter, or use a signed integer to begin with. [Also, I think that pixel-based lookbacks ignore the extra space off the edge, while duplet-based ones do not, but I'm not certain of that. I'll verify that too, but it's too late now.] I didn't realize this was new information; I just thought it had been left out.
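
To make the overlapping copy concrete, here is a minimal sketch of the pixel-by-pixel back-copy described above. It is not the actual tBMP command decoder; the function name copy_back, the flat output buffer, and the parameter names are all assumptions for illustration. The point is that copying forward one pixel at a time lets a lookback shorter than the copy length re-read pixels written earlier in the same command, which is what makes short patterns (such as a duplet) repeat.

<pre>
#include <stddef.h>
#include <stdint.h>

/* Sketch of an overlapping back-copy over a decoded pixel buffer.
 * out      - decoded pixels written so far
 * pos      - index of the next pixel to write
 * lookback - how many pixels to step back from pos (must be <= pos)
 * count    - how many pixels to copy
 * When count > lookback, the source range overlaps the destination,
 * so the loop re-reads pixels it has just written. */
static void copy_back(uint8_t *out, size_t pos, size_t lookback, size_t count)
{
    size_t src = pos - lookback;       /* kept as an index into out, so
                                          nothing is decremented below zero */
    for (size_t i = 0; i < count; i++) {
        out[pos + i] = out[src + i];   /* may read a pixel written by an
                                          earlier iteration of this loop */
    }
}
</pre>

Note that a single memcpy of count pixels would not reproduce this behavior when the ranges overlap; the repeat effect depends on copying one pixel (or duplet) at a time, in order.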