Wednesday, April 20, 2011

When to override GetHashCode()?

When should we override the GetHashCode() method provided by the 'Object' class in the 'System' namespace?

From stackoverflow
  • When you override Equals, basically. When you want to provide a different idea of equality than simple reference equality.

    String is a good example of this - two strings are equal (under a simple Equals call) if they represent the same sequence of characters. The hash code reflects this, such that if two strings are equal they will have the same hash code. (The reverse isn't necessarily true - two unequal strings can have the same hash code, but it's unlikely.)

    (Strings are tricky in other ways, mind you - there are lots of different ideas of equality based on culture and casing, but String.Equals just looks at the UTF-16 code points which make up the string, and compares them in the simplest conceivable fashion.)

    RSolberg : My coworker and I were just discussing this one today. Makes much more sense now. Thanks Jon.
    tush1r : thanks to Jon for such an easy-to-understand description.
  • If you override Equals you must override GetHashCode as well.

  • "The GetHashCode method can be overridden by a derived type. Value types must override this method to provide a hash function that is appropriate for that type and to provide a useful distribution in a hash table. For best results, the hash code must be based on the value of an instance field or property instead of a static field or property.

    Objects used as a key in a Hashtable object must also override the GetHashCode method because those objects must generate their own hash code. If an object used as a key does not provide a useful implementation of GetHashCode, you can specify a hash code provider when the Hashtable object is constructed. Prior to the .NET Framework version 2.0, the hash code provider was based on the System.Collections.IHashCodeProvider interface. Starting with version 2.0, the hash code provider is based on the System.Collections.IEqualityComparer interface."

    http://msdn.microsoft.com/en-us/library/system.object.gethashcode.aspx

  • If your type should follow value semantics (comparing contents) instead of reference semantics (comparing object identity), you should write your own override of instance object.Equals().
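
    As a minimal sketch of the pattern these answers describe (a hypothetical Point type; the 397 multiplier is just a conventional prime for mixing fields):

    using System;

    public sealed class Point
    {
        private readonly int x;
        private readonly int y;

        public Point(int x, int y) { this.x = x; this.y = y; }

        public override bool Equals(object obj)
        {
            // Value semantics: two Points are equal if their fields match.
            Point other = obj as Point;
            return other != null && x == other.x && y == other.y;
        }

        public override int GetHashCode()
        {
            // Combine exactly the fields that participate in Equals,
            // so equal instances always produce equal hash codes.
            unchecked { return (x * 397) ^ y; }
        }
    }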

Mysql stored procedure : cant run from PHP code

I have the stored procedure below to check user name availability:

DELIMITER $$

DROP PROCEDURE IF EXISTS tv_check_email$$

CREATE PROCEDURE tv_check_email (IN username varchar(50))
BEGIN
  select USER_ID from tv_user_master where EMAIL = username;
END$$

DELIMITER ;

When I run this from my MySQL front-end tool, it works fine:

call tv_check_email('shyju@techies.com')

But when I try to execute it from the PHP page, I get an error like "PROCEDURE mydatabase.tv_check_email can't return a result set in the given context".

Can anyone tell me why this is?

For reference, my PHP version is 5.2.6.

Thanks in advance

From stackoverflow
  • You need to bind your result into an OUT parameter.

    See the mysql docs on stored procedures

    mysql> delimiter //
    
    mysql> CREATE PROCEDURE simpleproc (OUT param1 INT)
        -> BEGIN
        ->   SELECT COUNT(*) INTO param1 FROM t;
        -> END;
        -> //
    Query OK, 0 rows affected (0.00 sec)
    
    mysql> delimiter ;
    
    mysql> CALL simpleproc(@a);
    Query OK, 0 rows affected (0.00 sec)
    
    mysql> SELECT @a;
    +------+
    | @a   |
    +------+
    | 3    |
    +------+

    Shyju : thanks cody , it worked
  • It looks like if you use the mysqli PHP library you can actually retrieve your result set without having to use an OUT variable and another query to retrieve your value. This article covers the details:

    http://amountaintop.com/php-5-and-mysql-5-stored-procedures-error-and-solution-qcodo

  • Cody is not 100% right. You can bind your resulting return columns and return select data from within a stored procedure.

    $mysqli = new mysqli("localhost", "my_user", "my_password", "world");
    
    $stmt = $mysqli->prepare("call tv_check_email(?)");
    $email = "shyju@techies.com"; // bind_param takes variables by reference, not literals
    $stmt->bind_param('s', $email);
    $stmt->execute();
    
    $stmt->bind_result($userid);
    
    while ($stmt->fetch()) {
      printf("User ID: %d\n", $userid);
    }
    
    $stmt->close();
    $mysqli->close();
    
    Cody Caughlan : Ah yes, much better. From my understanding, the std. mysql extension doesn't support accessing result sets w/o OUT, but mysqli does (as your code indicates)? Good to know.
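
    For completeness, a sketch of the OUT-parameter route from PHP as well (hypothetical credentials; reuses the simpleproc example from the first answer):

    $mysqli = new mysqli("localhost", "my_user", "my_password", "mydatabase");

    // Run the procedure; it writes its result into the session variable @a
    $mysqli->query("CALL simpleproc(@a)");

    // Read the session variable back with a second query
    $result = $mysqli->query("SELECT @a AS total");
    $row = $result->fetch_assoc();
    echo $row['total'];

    $mysqli->close();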

What are the technological limits to the usability of AJAX web apps?

I am trying to understand the technical limits to the usability of web-based productivity applications that use only open, cross-platform technologies such as Javascript, HTML, and CSS on the client. [1]

Let's assume for a moment that in the next few years the capabilities of web browsers continue to improve (e.g. with HTML 5 and faster JS engines), and significant progress is made in increasing bandwidth and reducing latency. What technological barriers (e.g. performance, graphics, modes of user interaction) will remain that limit the usability of web productivity apps when compared to conventional client-side applications? (Apart from offline access and issues that have significant non-technological aspects, such as privacy concerns.)

[1] By "productivity applications", I mean things like office suites, email, calendars, and diagramming programs.

From stackoverflow
  • Older browsers. There are still a lot of IE 6 users around. As the web becomes more AJAX-y, current browsers that just barely cut it are going to be more of a problem.

    Funka : I've never run into any significant problems with Ajax on IE6, especially when using a framework for abstraction, such as jQuery (Don't get me started on the CSS problems, though!)
  • and significant progress is made in increasing bandwidth and reducing latency.

    This IS the limitation, and latency is not something that is going to improve significantly in the future (there are real physical limits here). The roundtrip is the bottleneck.

    As for improvements: as JavaScript gets faster, I see there being less AJAX and more client-side work. Right now, a lot of AJAX is used to get display HTML from the server for rendering in the browser. In the future, AJAX will be used strictly for data, with JavaScript handling all the display.

    So the barrier I see is JavaScript performance.

    Calvin : That really depends on where you live. If you live in Japan or South Korea where they're already rolling out 1Gbps symmetric FttH connections, then _maybe_ you can make that claim. But here in the U.S., the majority of the population still only has access to 13-year-old ADSL technology.
  • The real issue is that html+css does not provide 2d or 3d rendering primitives or any sort of real-time sound interface. Without those, a lot of the stuff we expect out of desktop apps isn't possible. I'm thinking of games, 2d/3d image and video editing, real-time communications, that sort of thing. Obviously, you can do these things now, just not with open standards. With a little luck, more and more of the rich functionality available with Flash, Silverlight, and JavaFX will get pushed into "standards" and the barriers will be completely gone.

    I don't see any reason 99% of "productivity" apps couldn't run in a browser in a few years.

  • Basically with the flash virtual machine and javascript implementations in browsers improving, what you are seeing is convergence of traditional web functionality with typical client side application design. The primary difference is that the code for each page or snippet is downloaded and executed on demand and within a standardised environment across the various platforms out there. Essentially traditional web applications are becoming more like client applications. However there is still a need for web applications that don't operate like this. Today, you have the choice of either, or a combination of both.

Has anyone used Data Dynamics Reports?

We currently use ActiveReports (by Data Dynamics, now Grape City) for canned reports, but are considering moving up to their Reports package. If you've used it, I'd love to hear your take on:

  • Performance - do you feel it will scale well for a web based app (particularly compared with ActiveReports)
  • Export to Excel - it appears to provide a much cleaner export to Excel (ActiveReports' Excel export is awful, our biggest reason for considering a switch)
  • Other pros/cons (my company is pretty small, the $3,000 for 2 licenses is a lot for us)

Thanks for your feedback!

From stackoverflow
  • I've only used ActiveReports as well, but their web licensing model is a bit expensive in general in my view, especially if you need to develop multiple apps on multiple servers. Then there are the per-developer costs as well.

    I use DevXpress XtraReports and have been fairly happy with it so far and it has some fairly decent export functionality and a much better licensing model.

    Regarding export to Excel, I've not seen any reporting tool do it well, mainly due to the formatting issues with the report itself. What we typically do is provide the formatted report to the user, along with an additional link for an Excel export which is a similar but different query with the raw data the report uses.

    Another option over formatted printable reports is using grids such as Infragistics which allow you to do sorting, grouping, summaries, and which has excellent Excel export features.

  • Here are some additional information for you to consider about ActiveReports & Data Dynamics Reports:

    ActiveReports Licensing:

    Their license is per developer. There are no royalties. You can write as many applications as you want and deploy your application to as many users or as many servers as you want without any additional costs. Read the ActiveReports License agreement here.

    Reporting to Excel:

    First of all, schooner is absolutely correct that the other reporting tools handle exporting to Excel poorly. We recognized the same after many years of experience with ActiveReports. Frankly, it is a very hard problem to take reports designed to be paginated or deployed on the web and put them into the cell-based layout of a spreadsheet.

    However, with Data Dynamics Reports, we took a completely different approach. Instead of creating just another "export to Excel", where we look at "paginated" report output and try to fit it into a spreadsheet somehow, we generate the Excel output based on two things: a template and the actual data in the report. By using a template, which is actually a specially formatted Excel sheet (cells have some special placeholders in them), the reporting engine can output the report's content to an Excel sheet completely independently of how the report is laid out when paginated. We call this concept a "Transformation Extension" for Excel, since it takes the report's content and transforms it to Excel based on a template.

    By default DDReports will generate a default template that you will find more often than not has pretty good output. However, if the Excel output is not what you want, you can instruct DDReports to save the template so you can customize the output in Excel.

    The best way to get an introduction to this is to watch the screencast for the Excel Transformation Extension in Data Dynamics Reports here. Jump to about 1:20 in the screencast if you're impatient and want to see an example of a simple template. Keep in mind this is a very simple template and the possibilities are much more sophisticated. Unfortunately, thus far we haven't published very good documentation on using the Excel Transformation Extension template syntax yet, but let me know if you have questions and I'll help you out! Just comment on this post or send an email to our support team.

    Scott Willeke

    Data Dynamics / GrapeCity

    Jess : Hey Scott, thanks a ton for your reply! We definitely like the Excel output features of DDReports ... but will DDReports have similar high volume performance to ActiveReports? We'll be running a few hundred thousand reports per week off of a single server ... thanks!
    scott : Yes, DD Reports' is designed for high volume loads, especially on the server. However, the engine in DDR and AR are different so each engine will have different performance behaviors under different reports, but on the server I think you'll find DDRs performance good. If not, let me know.
  • We use both products and they are quite different from each other. I have been a long-time user of ActiveReports and have loved it. But when it came time to select a .NET reporting tool, we did not want to spend a bunch of $$, so we decided to get their DDR product. It took me a couple of weeks to get used to it, as I kept trying to use it like ActiveReports. Not a good idea. Anyway, once you get used to it, it does a decent job. There are some things that they need to do to improve the product. Here are the things that stand out.

    1. You cannot access the control collection in the code area. This is a huge problem if you want to change anything like data binding inside the report.

    2. The database connection has to be refreshed if you reopen the report in the designer. This took a while to figure out, and we wondered why our fields would not show up in preview mode when we reloaded the report.

    3. Their new tech support is terrible. They were bought out recently, and now when you call tech support you get someone with no knowledge who always tells you that someone will call you back. 80% of the time you get no call back. The other 20% of the time you get a sample emailed to you that has nothing to do with your issue. This is across the board with both products. They used to have great tech support. I hope they fix this.

    Those are the main problems, and I know they are working to solve the issues. Like I said, we use both DDR and ActiveReports. If you need to do complicated reports, stick with ActiveReports. If they are simple and you do not want to spend a lot of money, then DDR works fine. I see DDR getting better with each release, but it will take a while to get the kinks worked out.

    Just my opinion

  • This is to give more information to Bill's response in this thread. I tried to post a comment, but ran out of room :)

    Bill: Thanks for your honest assessment. Let me give you some comments from the inside on the issues you mentioned:

    1: Admittedly it is not quite as intuitive to access the controls collection as it was with AR, but you /can/ do it. You need to do it outside of the report (not in the script/code embedded into the report). To do it you can load the rdlx file in a ReportDefinition object. For example:

    var rpt = new DataDynamics.Reports.ReportDefinition(new FileInfo("myfile...rdlx"));
    var list = (DataDynamics.Reports.ReportObjectModel.List)rpt.Report.Body.ReportItems["myList"];
    var txt = (DataDynamics.Reports.ReportObjectModel.TextBox)list.ReportItems["myTextBox"];
    txt.Value = "=Fields!MyField.Value";
    

    However, depending on the scenario you're after there may be a better way to handle this than changing the binding on the control/reportItem itself. It is difficult to say more without knowing more about your particular scenario/goals.

    2: There was recently some discussion I was involved in on how to improve this in the very near future. The dev team was gathering use cases and doing some investigation on various caching strategies to keep hitting the database to an absolutely minimum in the designer. So look for improvements in this area in an upcoming build.

    3: Unfortunately, we're working through some challenges with our new technical support team. However, we are improving constantly and we're working hard to bring up the new guys as quickly as possible. If you have a problem with one of your incidents with support, feel free to email me personally with your case number and I'll work to try to get your case escalated or help out in any way I can (scott dot willeke at grapecity dot com).

    Thanks again for your feedback, my next letter is an internal one based on your feedback to help us improve!

    Scott Willeke
    Program Manager
    Data Dynamics / GrapeCity inc.
    

What is the best compression algorithm for small 4kb files?

I am trying to compress TCP packets, each about 4 KB in size. The packets can contain any byte (from 0 to 255). All of the benchmarks on compression algorithms that I found were based on larger files. I did not find anything that compares the compression ratio of different algorithms on small files, which is what I need. I need it to be open source so it can be implemented in C++, so no RAR for example. What algorithm would you recommend for small files of about 4 kilobytes in size? lzma? hacc? zip? gzip2?

From stackoverflow
  • All of those algorithms are reasonable to try. As you say, they aren't optimized for tiny files, but your next step is to simply try them. It will likely take only 10 minutes to test-compress some typical packets and see what sizes result. (Try different compress flags too). From the resulting files you can likely pick out which tool works best.

    The candidates you listed are all good first tries. You might also try bzip2.

    Sometimes a simple "try them all" approach is a good solution when the tests are easy to do... thinking too much can sometimes slow you down.

    Blorgbeard : I agree, and ask that you post your results when you're done :)
  • For small packets the biggest difference is achieved by Huffman-like distribution encodings, since the most-used byte values automatically consume the least space. If you apply dictionary-based compression (an LZ variant) on top of that, you will have very decent compression running.

  • I don't think the file size matters - if I remember correctly, the LZW in GIF resets its dictionary every 4K.

  • ZLIB should be fine. It is used in MCCP.

    However, if you really need good compression, I would do an analysis of common patterns and include a dictionary of them in the client, which can yield even higher levels of compression.
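
    zlib supports this idea directly via a preset dictionary. A rough C sketch (common_patterns and common_patterns_len are hypothetical: bytes that frequently occur in your packets, shared by both endpoints):

    #include <string.h>
    #include <zlib.h>

    extern const unsigned char common_patterns[];
    extern const unsigned int common_patterns_len;

    void init_compressor(z_stream *strm)
    {
        memset(strm, 0, sizeof(*strm));
        deflateInit(strm, Z_DEFAULT_COMPRESSION);

        /* Prime the compressor; the receiver must call inflateSetDictionary
           with exactly the same bytes when inflate() reports Z_NEED_DICT. */
        deflateSetDictionary(strm, common_patterns, common_patterns_len);
    }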

  • I did what Arno Setagaya suggested in his answer: made some sample tests and compared the results.

    The compression tests were done using 5 files, each of them 4096 bytes in size. Each byte inside of these 5 files was generated randomly.

    IMPORTANT: In real life, the data would not likely be all random, but would tend to have quite a bit of repeating bytes. Thus, in a real-life application the compression would tend to be a bit better than the following results.

    NOTE: Each of the 5 files was compressed by itself (i.e. not together with the other 4 files, which would result in better compression). In the following results I just use the sum of the sizes of the 5 files together for simplicity.

    I included RAR just for comparison reasons, even though it is not open source.

    Results: (from best to worst)

    LZOP: 20775 / 20480 * 100 = 101.44% of original size

    RAR : 20825 / 20480 * 100 = 101.68% of original size

    LZMA: 20827 / 20480 * 100 = 101.69% of original size

    ZIP : 21020 / 20480 * 100 = 102.64% of original size

    BZIP: 22899 / 20480 * 100 = 111.81% of original size

    Conclusion: To my surprise, ALL of the tested algorithms produced a larger size than the originals!!! I guess they are only good for compressing larger files, or files that have a lot of repeating bytes (not random data like the above). Thus I will not be using any type of compression on my TCP packets. Maybe this information will be useful to others who consider compressing small pieces of data.

    EDIT: I forgot to mention that I used default options (flags) for each of the algorithms.

    kquinn : Your test is pretty worthless. Just about *any* compression algorithm will choke on random data -- in fact, compression ratio is a useful test for *how random* a chunk of data is -- if "compressing" enlarges data, it's probably high-entropy. Try again with real data and you might get useful results.
    Rick C. Petty : I agree that the test is worthless. Randomly-distributed data will not compress, in fact the basis of most compression algorithms is that the data is not random. Also, your comparison does not include zlib which only adds 5 bytes every 64k when STORE is used instead of DEFLATE.
    derobert : Compression is not magic. It works by observing repeating patterns. Random data has no repeating patterns, and will thus not compress. It cannot, by the pigeonhole principle: there are 256^4096 possible 4096-byte packets, but only 256^4095 of length 4095.
  • I've had luck using the zlib compression libraries directly, without any file containers. ZIP and RAR have overhead to store things like filenames. I've seen compression this way yield positive results (compressed size less than original) for packets down to 200 bytes.

  • Here are some questions to ponder!!!

    Are you transmitting the dictionary within each packet?

    How big is the dictionary for each file?

    Is the data really random?

    I would suggest you analyze your data and build a static dictionary that the receiving end 'knows', or at least can be updated occasionally.

    This will save considerable space and transmission time. There is no reason why this dictionary can't be huge compared to your packet size of 4 KB, e.g. 32 MB.

    What's the best way to transfer the contents of the phone book from A to B? Tell B to pick up his copy and use it!!!

  • Choose the algorithm that is the quickest, since you probably care about doing this in real time. Generally for smaller blocks of data, the algorithms compress about the same (give or take a few bytes) mostly because the algorithms need to transmit the dictionary or Huffman trees in addition to the payload.

    I highly recommend Deflate (used by zlib and Zip) for a number of reasons. The algorithm is quite fast, well tested, BSD licensed, and is the only compression required to be supported by Zip (as per the InfoZip APPNOTE). Aside from the basics, when it determines that the compressed output is larger than the original size, there's a STORE mode which only adds 5 bytes for every block of data (max block is 64k bytes). Aside from the STORE mode, Deflate supports two different types of Huffman tables (or dictionaries): dynamic and fixed. A dynamic table means the Huffman tree is transmitted as part of the compressed data and is the most flexible (for varying types of nonrandom data). The advantage of a fixed table is that the table is known by all decoders and thus doesn't need to be contained in the compressed stream. The decompression (or Inflate) code is relatively easy. I've written both Java and Javascript versions based directly off of zlib and they perform rather well.

    The other compression algorithms mentioned have their merits. I prefer Deflate because of its runtime performance on both the compression step and particularly in decompression step.

    A point of clarification: Zip is not a compression type, it is a container. For doing packet compression, I would bypass Zip and just use the deflate/inflate APIs provided by zlib.
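
    Along those lines, a minimal C sketch of per-packet compression with zlib's one-shot API, falling back to the raw bytes when deflate would enlarge the payload (a real protocol needs a flag byte to say which form was sent):

    #include <string.h>
    #include <zlib.h>

    /* Compress one packet into out (which should hold at least
       compressBound(in_len) bytes); returns the number of bytes to send. */
    unsigned long compress_packet(const unsigned char *in, unsigned long in_len,
                                  unsigned char *out, unsigned long out_cap)
    {
        uLongf out_len = (uLongf)out_cap;
        if (compress2(out, &out_len, in, (uLong)in_len, Z_BEST_SPEED) != Z_OK
            || out_len >= in_len)
        {
            memcpy(out, in, in_len);  /* incompressible, e.g. random data */
            return in_len;
        }
        return (unsigned long)out_len;
    }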

  • You can try delta compression (http://en.wikipedia.org/wiki/Delta_encoding). Compression will depend on your data. If you have any encapsulation on the payload, then you can compress the headers.

  • If you want to "compress TCP packets", you might consider using a RFC standard technique.

    • RFC2394 IP Payload Compression Using DEFLATE
    • RFC2395 IP Payload Compression Using LZS
    • RFC3173 IP Payload Compression Protocol (IPComp)
    • RFC3051 IP Payload Compression Using ITU-T V.44 Packet Method
    • RFC5172 Negotiation for IPv6 Datagram Compression Using IPv6 Control Protocol
    • RFC5112 The Presence-Specific Static Dictionary for Signaling Compression (Sigcomp)
    • RFC3284 The VCDIFF Generic Differencing and Compression Data Format
    • RFC2118 Microsoft Point-To-Point Compression (MPPC) Protocol

    There are probably other relevant RFCs I've overlooked.

troublesome javascript

I'm trying to make a little tag-list doohickey for adding and removing tags. I have a textbox where the user can enter the tags, separated by commas, and an Add button. When the user clicks the Add button, I would like to add a small div inside a div below the box. The small div should contain the tag and a little x for removing the tag later. Here's what I have:

<script type='text/javascript'>
  function tagsremove(tag) {
    document.getElementByName('tags').value.replace('/'+tag+'\,\s/', '');
  }

  $('#tagbutton').click(function(){
    var tags = $('#tagsbox').text().split(", ");
    for (var tag in tags) {
      document.getElementByName('tags').value += tag +", ";
      $('#curtags').append("<div class='tag'>" 
        + tag 
        + " <a href='#' onlclick='tagsremove(\'" 
        + tag 
        + "\');$(this).hide();'>x</a></div>")
    }
  });
</script>

<div class='statbox'>
  <form method='post' action='post.php' id='writeform'>
    <p class='subtitle'>Title</p>
    <input type='text' name='title' id='titlebox' /><br />
    <p class='subtitle'>Body</p>
    <textarea id='postbox' name='body' rows='10'></textarea><br />
    <p class='subtitle'>Tags</p>
    <input type='text' id='tagsbox' /><input type='button' id='tagbutton' 
      value='Add' />
    <p class='subsubtitle'>Seperate by commas 
      (eg. "programming, work, job")</p>
    <div class='subsubtitle' id='curtags'>Current Tags:</div>
    <input type='hidden' value='' name='tags' />
  </form>
</div>

The problem I'm having is that when I click the Add button, nothing happens. I would like to fix this.

From stackoverflow
  • I am not familiar with using this method to call a function when a button is clicked

    $('#tagbutton').click(function(){
    

    I usually just put

    onClick='function()'
    

    inside the input tag. and declare the function as normal up in the script.

    Also, I think you should delimit tags with a single space; this is what people are used to. But if you do decide you want to be able to use multiple-word tags, then delimit by "," not ", ".

    Alconja : Looks like he's using jQuery.
    I.devries : Javascript shouldn't be in your HTML. Behaviour should be separated from markup.
  • My guess is your script block that registers the click event is being executed before the DOM is loaded, so the click event isn't actually being registered to a real element. Put your click event inside the document.ready event like this:

    $(function() {
        $('#tagbutton').click(function(){
            //etc...
        });
    });
    

    Also (as an aside), why are you mixing jQuery with regular JavaScript? It would probably be neater to change your hidden tags field to have an id of tags and do $('#tags').val(...) rather than document.getElementByName('tags').value = ...

    Charlino : Exactly, doesn't exist when the $('#tagbutton').click() is called.
  • Your first problem is that

    $('#tagsbox').text()
    

    should be

    $('#tagsbox').val()
    

    because #tagsbox is an input field.

    There are other issues, like splitting on "," and then trimming rather than splitting on ", " but I think your main problem is the .text() vs .val()

  • You have some issues in your code:

    1) document.getElementByName('tags')

    No such function exists; the function you're trying to use is getElementsByName (notice the 's'). But since you're using jQuery, you could use a selector like:

     var hiddenTags = $('input[name=tags]');
    

    2) You're using text() instead of val(), as @Blair points out.

    3) In the for-each, you access the element indexes only; to access the actual element value, you have to do something like this:

    for (var i in tags) {
        var tag = tags[i];
    }
    

    There will be more work to do, but for a start, check my corrections here.

  • First, as someone above mentioned, your JavaScript code for the onclick event is being registered before the element is created on the page. Thus, it is not bound. To fix this, wrap your code in this DOM-ready function provided by jQuery:

    $(document).ready(function () {
        //put code here
    });
    

    Change this line var tags = $('#tagsbox').text().split(", ") to

    var tags = $('#tagsbox').attr('value').split(',')
    

    There is also a syntax error in your code since document.getElementByName is not a JS function. Perhaps assign it an id or a name attribute to target it.

    Next, once you get the hidden tags split into an array, perhaps traverse them this way to build them.

    $.each(tags, function(i, val) {
        $('#curtags').append("<div class='tag'>" 
        + val 
        + " <a href='#' onclick='tagsremove(\"" 
        + val 
        + "\");$(this).hide();'>x</a></div>")
    });
    

Fogbugz or BaseCamp which proj man. tool do you prefer

Which do you prefer: FogBugz or Basecamp (project management tools)?

We are trying to decide which one to go ahead with so any likes and dislikes and things to watch out for would be appreciated.

From stackoverflow
  • I have only used FogBugz, for bug tracking as well as development tasks. I like creating a case out of a bug, and the integration with Perforce. The whole UI experience is very user-friendly.

  • If you're dealing with a single outside client for the project use Basecamp. It's designed for facilitating that type of communication and it does so very well. But it's not really ideal for a software project. You can't assign items to people, there's no scheduling, etc. Using it to manage a software project would definitely be a classic square peg/round hole situation.

    If you're working on the project only internally with a team, FogBugz hands down. Evidence-based scheduling is brilliant. It has proper task tracking (priorities, due dates, user assignments, etc) and source control integration. It was built for managing a software project and it excels at it.

  • I haven't had any experience with Basecamp (have been looking at it, but no actual usage), but I can say that I really do like Fogbugz.

    What I like about it is its ease of use, no-nonsense approach to things, and its simplicity. Also, since you get the full documentation for the database backend (on MS SQL Server or MySQL), it's open for your own extensions.

    I give Fogbugz a big THUMBS UP

    Marc

Finalize vs Dispose

Why do some people use the Finalize method over the Dispose method?

In what situations would you use the Finalize method over the Dispose method and vice versa?

From stackoverflow
  • Finalize gets called by the GC when this object is no longer in use.

    Dispose is just a normal method which the user of this class can call to release any resources.

    If the user forgot to call Dispose and the class has Finalize implemented, then the GC will make sure it gets called.

  • The finalizer method is called when your object is garbage collected and you have no guarantee when this will happen (you can force it, but it will hurt performance).

    The Dispose method on the other hand is meant to be called by the code that created your class so that you can clean up and release any resources you have acquired (unmanaged data, database connections, file handles, etc) the moment the code is done with your object.

    The standard practice is to implement IDisposable and Dispose so that you can use your object in a using statement, such as using(var foo = new MyObject()) { }. And in your finalizer, you call Dispose, just in case the calling code forgot to dispose of you.

    itowlson : You need to be a bit careful about calling Dispose from your Finalize implementation -- Dispose may also dispose managed resources, which you don't want to touch from your finalizer, as they may already have been finalized themselves.
    Samuel : @itowlson: Checking for null combined with the assumption that objects can be disposed of twice (with second call doing nothing) should be good enough.
    Brody : The standard IDisposal pattern and the hidden implementation of a Dispose(bool) to handle disposing managed components optional seems to cater for that issue.
    peterchen : Brody, I just wish this pattern would be easier to implement....
  • Finalize is the backstop method, called by the garbage collector when it reclaims an object. Dispose is the "deterministic cleanup" method, called by applications to release valuable native resources (window handles, database connections, etc.) when they are no longer needed, rather than leaving them held indefinitely until the GC gets round to the object.

    As the user of an object, you always use Dispose. Finalize is for the GC.

    As the implementer of a class, if you hold managed resources that ought to be disposed, you implement Dispose. If you hold native resources, you implement both Dispose and Finalize, and both call a common method that releases the native resources. These idioms are typically combined through a private Dispose(bool disposing) method, which Dispose calls with true, and Finalize calls with false. This method always frees native resources, then checks the disposing parameter, and if it is true it disposes managed resources and calls GC.SuppressFinalize.
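
    A minimal sketch of that idiom (a hypothetical ResourceHolder class; ReleaseHandle stands in for whatever native call actually frees the resource):

    using System;

    public class ResourceHolder : IDisposable
    {
        private IntPtr nativeHandle;        // unmanaged resource
        private IDisposable managedThing;   // managed resource
        private bool disposed;

        public void Dispose()
        {
            Dispose(true);
            GC.SuppressFinalize(this);      // finalizer no longer needed
        }

        ~ResourceHolder()                   // Finalize: the GC path
        {
            Dispose(false);
        }

        protected virtual void Dispose(bool disposing)
        {
            if (disposed) return;
            ReleaseHandle(nativeHandle);    // always free native resources
            if (disposing && managedThing != null)
                managedThing.Dispose();     // managed ones only on Dispose()
            disposed = true;
        }

        private static void ReleaseHandle(IntPtr handle)
        {
            // e.g. a P/Invoke call such as CloseHandle(handle)
        }
    }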

  • 99% of the time, you should not have to worry about either. :) But, if your objects hold references to non-managed resources (window handles, file handles, for example), you need to provide a way for your managed object to release those resources. Finalize gives implicit control over releasing resources. It is called by the garbage collector. Dispose is a way to give explicit control over a release of resources and can be called directly.

    There is much much more to learn about the subject of Garbage Collection, but that's a start.

    Samuel : I'm pretty sure more than 1% of C# applications use databases: where you *have* to worry about IDisposable SQL stuff.
    Darren Clark : Also, you should implement IDisposable if you encapsulate IDisposables. Which probably covers the other 1%.
    JP Alioto : @Samuel: I don't see what databases has to do with it. If you are talking about closing connections, that's fine, but that's a different matter. You don't have to dispose objects to close connections in a timely manner.
    Brody : @JP: But the Using(...) pattern makes it so much simpler to cope with.
    JP Alioto : Agreed, but that's exactly the point. The using pattern hides the call to Dispose for you.
  • Others have already covered the difference between Dispose and finalize (btw the finalize method is still called a destructor in the language specification), so I'll just add a little about the scenarios where the finalize method comes in handy.

    Some types encapsulate disposable resources in a manner where it is easy to use and dispose of them in a single action. The general usage is often like this: open, read or write, close (Dispose). It fits very well with the using construct.

    Others are a bit more difficult. EventWaitHandles, for instance, are not used like this, as they are used to signal from one thread to another. The question then becomes: who should call Dispose on these? As a safeguard, types like these implement a finalize method, which makes sure resources are disposed when the instance is no longer referenced by the application.

  • Finalize

    • Finalizers should always be protected, not public or private so that the method cannot be called from the application's code directly and at the same time, it can make a call to the base.Finalize method
    • Finalizers should release unmanaged resources only.
    • The framework does not guarantee that a finalizer will execute at all on any given instance.
    • Never allocate memory in finalizers or call virtual methods from finalizers.
    • Avoid synchronization and raising unhandled exceptions in the finalizers.
    • The execution order of finalizers is non-deterministic—in other words, you can't rely on another object still being available within your finalizer.
    • Do not define finalizers on value types.
    • Don't create empty destructors. In other words, you should never explicitly define a destructor unless your class needs to clean up unmanaged resources—and if you do define one, it should do some work. If, later, you no longer need to clean up unmanaged resources in the destructor, remove it altogether.

    Dispose

    • Implement IDisposable on every type that has a finalizer
    • Ensure that an object is made unusable after making a call to the Dispose method. In other words, avoid using an object after the Dispose method has been called on it.
    • Call Dispose on all IDisposable types once you are done with them
    • Allow Dispose to be called multiple times without raising errors.
    • Suppress later calls to the finalizer from within the Dispose method using the GC.SuppressFinalize method
    • Avoid creating disposable value types
    • Avoid throwing exceptions from within Dispose methods

    Dispose/Finalized Pattern

    • Microsoft recommends that you implement both Dispose and Finalize when working with unmanaged resources. The Finalize implementation would run and the resources would still be released when the object is garbage collected even if a developer neglected to call the Dispose method explicitly.
    • Cleanup the unmanaged resources in the Finalize method and the managed ones in the Dispose method, when the Dispose/Finalize pattern has been used in your code.

    code example and additional resources are here: mgznet.com/disposedAndFinalized.aspx

Why use Events?

I'm learning how events work in C# (I'm a fair newbie in this field). What I'm trying to understand is why we use events.

Do you know a well coded / architected app which uses events?

** rest of message snipped and put in as an answer**

From stackoverflow
  • You could always build your own way of sending/receiving events, subscribing/unsubscribing to event sources. But the language gives you a simple/standard way of doing it, so that is a good reason to use language "events" instead of your own events technique.

    Also, using the language "events" allows you to do all kinds of cool things using reflection because it is standardized.

    As to why use an event technique at all: there are all kinds of real-life examples where it is quite useful and simpler to use events. Events are similar in their usefulness to Windows Messages.

  • At the most basic conceptual level, Events are what let the computer react to what you do, rather than you being required to react to what the computer does. When you're sitting in front of your PC with several applications running (including the operating system), and several clickable objects available in each context for you to choose among, Events are what happens when you choose one and all the pieces involved can be properly notified.

    Even moving your mouse around kicks off a stream of events (to move the cursor, for instance).

  • You can implement the Observer Pattern in C# with Events and Delegates.

    Here is a link to an article that describes such: http://blogs.msdn.com/bashmohandes/archive/2007/03/10/observer-pattern-in-c-events-delegates.aspx



  • *this used to be in the question body

    What would be very useful is a non-trivial example of an app which uses events (I guess it really helps testing too?)

    Thoughts so far are:

    Why use Events or publish / subscribe?

    Any number of classes can be notified when an event is raised.

    The subscribing classes do not need to know how the Metronome (see code below) works, and the Metronome does not need to know what they are going to do in response to the event.

    The publisher and the subscribers are decoupled by the delegate. This is highly desirable as it makes for more flexible and robust code. The metronome can change how it detects time without breaking any of the subscribing classes. The subscribing classes can change how they respond to time changes without breaking the metronome. The two classes spin independently of one another, which makes for code that is easier to maintain.

    class Program
    {
        static void Main()
        {
         // setup the metronome and make sure the EventHandler delegate is ready
         Metronome metronome = new Metronome();
    
         // wires up the metronome_Tick method to the EventHandler delegate
         Listener listener = new Listener(metronome);
         ListenerB listenerB = new ListenerB(metronome);
         metronome.Go();
        }
    }
    
    public class Metronome
    {
        // a delegate
        // so every time Tick is called, the runtime calls another method
        // in this case Listener.metronome_Tick and ListenerB.metronome_Tick
        public event EventHandler Tick;
    
        // virtual so can override default behaviour in inherited classes easily
        protected virtual void OnTick(EventArgs e)
        {
         // null guard so if there are no listeners attached it won't throw an exception
         if (Tick != null)
          Tick(this, e);
        }
    
        public void Go()
        {
         while (true)
         {
          Thread.Sleep(2000);
          // because using EventHandler delegate, need to include the sending object and eventargs 
          // although we are not using them
          OnTick(EventArgs.Empty);
         }
        }
    }
    
    
    public class Listener
    {
        public Listener(Metronome metronome)
        {
         metronome.Tick += new EventHandler(metronome_Tick);
        }
    
        private void metronome_Tick(object sender, EventArgs e)
        {
         Console.WriteLine("Heard it");
        }
    }
    
    public class ListenerB
    {
        public ListenerB(Metronome metronome)
        {
         metronome.Tick += new EventHandler(metronome_Tick);
        }
    
        private void metronome_Tick(object sender, EventArgs e)
        {
         Console.WriteLine("ListenerB: Heard it");
        }
    }
    

    Full article I'm writing on my site: http://www.programgood.net/

    nb some of this text is from http://www.akadia.com/services/dotnet_delegates_and_events.html

    Cheers.

  • To provide a concrete real-world example....

    You have a form, the form has a listbox. There's a nice happy class for the listbox. When the user selects something from the listbox, you want to know, and modify other things on the form.

    Without events:

    You derive from the listbox, overriding things to make sure that your parent is the form you expect to be on. You override a ListSelected method or something that manipulates other things on your parent form.

    With events: Your form listens for the event to indicate a user selected something, and manipulates other things on the form.

    The difference being that in the without events case you've created a single-purpose class, and also one that is tightly bound to the environment it expects to be in. In the with events case, the code that manipulates your form is localized into your form, and the listbox is just, well, a listbox.

Apache Cocoon JAR configuration - I want to use .class files !

Hey guys and gals

I'm using the Apache Cocoon framework, set up several eons ago for the web app I'm developing.

I don't know if this is how Cocoon is set up for everyone, or if it's some 'special' configuration my company has performed, but this is what happens.

In order for Cocoon to use ANY class files, they must be bundled up into a JAR and put in the tomcat(5)/common/lib directory. Cocoon simply won't see the class files if we put them somewhere else.

Even if that somewhere else is in WEB-INF/classes or java or whatever.

Does anyone know how this configuration is set within cocoon (I'm a cocoon novice)? I want to be able to just bang my .class files in WEB-INF and away we go.

I know I should be using an IDE, but if you saw the app structure you would understand that I'm not. We're working towards it...

Many thanks in advance...

Mieze

From stackoverflow
  • I know nothing about Cocoon, but if you can put jars in tomcat/common/lib, then classes should work in tomcat/common/classes (NOT WEB-INF/classes).

    Placing stuff into tomcat/common (instead of inside the webapp itself) is kind of weird, but you probably need to change some Cocoon settings (or the place where Cocoon is installed) to avoid that. Is Cocoon part of the webapp or also "tomcat common"?

javascript image cropping problem

I'm using this code to select the image area which needs to be cropped.

function preview(img, selection) {
    var scaleX = 100 / selection.width;
    var scaleY = 100 / selection.height;

    $('#thumbnail + > img').css({
        width: Math.round(scaleX * 354) + 'px',
        height: Math.round(scaleY * 448) + 'px',
        marginLeft: '-' + Math.round(scaleX * selection.x1) + 'px',
        marginTop: '-' + Math.round(scaleY * selection.y1) + 'px'
    });
    $('#x1').val(selection.x1);
    $('#y1').val(selection.y1);
    $('#x2').val(selection.x2);
    $('#y2').val(selection.y2);
    $('#w').val(selection.width);
    $('#h').val(selection.height);
}

$(window).load(function () {
    $('#thumbnail').imgAreaSelect({
        x1: 120, y1: 90, x2: 280, y2: 210,
        aspectRatio: '1:1',
        onSelectChange: preview
    });
});

This works fine, but I'm using tabs to show different sections. When I click on the next tab I can still see the image cropper, which I don't want. How can I solve this? I'm very new to JavaScript.

From stackoverflow
  • Hi, I think you are using a div as the image cropper. If that is the case, you can set the div's style.display='none' when you click on the other tab.
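
    A minimal sketch of that suggestion (the id 'cropper' and the handler names are hypothetical; wire them to whatever your tab code calls):

    function showOtherTab() {
        // hide the cropper while another tab is active
        document.getElementById('cropper').style.display = 'none';
    }

    function showCropTab() {
        // restore the cropper when its tab is selected again
        document.getElementById('cropper').style.display = '';
    }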

SQL Server 2008 Query is slow in production but fast in development

I have a query that runs in about 2-4 minutes on production but runs in a couple of seconds on development. Both of these databases are on the same exact server. (no lectures about dev and production, production is really still in development).

I mean, I can just open two query windows and get the two different results consistently. I have run Red Gate SQL Compare and there is no schema difference (indexes and so forth). I have disabled the site that connects to the DB, so there should be no connections other than my Management Studio session.

What could be causing this? I create the development database by copying the production one (in the Management Studio, right click database and click "Copy Database")

This is really strange. I don't want to make any index changes, because the weird thing is that the copy is blazing fast while production is very, very slow, even though they should be essentially exact copies.

From stackoverflow
  • Try running SQL Profiler to see what's running on production.

  • You don't provide any details of the DB structure or the SQL query in question, but if you are confident that the setup is the same for both environments, then it may simply be the amount of data in your production DB that is highlighting an inefficient query.

  • I don't know SQL Server specifics, but usually this sort of thing is due to table statistics being different in the two databases. Look at the query plans to see if they are different. Run the SQL Server version of the "analyze table" or "analyze schema" commands.

    If these things don't help, check how the databases are set up. Is it possible that the data is identical, but server configurations are different, and, for example, there is a much lower threshold on available memory for the production version?

    Something else to check -- and this is just me showing my ignorance -- but does "copy database" actually copy the data, or just the object definitions?

    Michael K Campbell : Yeah, statistics are the first place to look. You can run EXEC sp_updatestats as a sort of flame-thrower to see if it makes any initial improvement. Otherwise, SquareCog is right: look for configuration issues (particularly with drives/disk).
  • Red Gate by default ignores statistics and things like fill factor.

    Mitch Wheat : what? would you care to elaborate?
    gbn : Edit project, last tab (options?), see the "ignore" list. And the "Red Gate default" button.
  • Ok, thanks everyone. I think the problem was related to index fragmentation. I thought Copy Database basically just copied the files. I did a DBCC DBREINDEX on every table and it's working great now. Thanks everyone!
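
    For reference, a sketch of how to check fragmentation before rebuilding (SQL Server 2005+; LIMITED mode keeps the scan cheap):

    SELECT OBJECT_NAME(ips.object_id) AS table_name,
           i.name                     AS index_name,
           ips.avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
      ON i.object_id = ips.object_id AND i.index_id = ips.index_id
    ORDER BY ips.avg_fragmentation_in_percent DESC;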

Getting SQL Server Execution Plan through SMO?

I know I can get the execution plan in Management Studio in XML format using "SET showplan_xml ON".

Is there a way to get the same information through .NET code?

From stackoverflow
  • SET is just a normal SQL command, so if you can send an arbitrary string to the server, send "SET SHOWPLAN_XML ON".
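
    A minimal ADO.NET sketch of that approach (plain SqlClient rather than SMO; the connection string and query are placeholders). With SHOWPLAN_XML ON, subsequent statements on the connection are not executed; the server returns each statement's plan as a single XML value:

    using System;
    using System.Data.SqlClient;

    class ShowPlan
    {
        static void Main()
        {
            using (var conn = new SqlConnection("Data Source=.;Integrated Security=true"))
            {
                conn.Open();
                using (var on = new SqlCommand("SET SHOWPLAN_XML ON", conn))
                    on.ExecuteNonQuery();

                using (var cmd = new SqlCommand("SELECT * FROM sys.objects", conn))
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                        Console.WriteLine(reader[0]);  // the plan XML
                }
            }
        }
    }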

Split Function equivalent in tsql?

I'm looking to split '1,2,3,4,5,6,7,8,9,10,11,12,13,14,15...' (comma delimited) into a Table or Table Variable. Does anyone have a Function that returns each one in a row?

I am using SQL Server 2008.

Thanks.

From stackoverflow
  • Here is a somewhat old-fashioned solution:

    /*
        Splits string into parts delimitered with specified character.
    */
    CREATE FUNCTION [dbo].[SDF_SplitString]
    (
        @sString nvarchar(2048),
        @cDelimiter nchar(1)
    )
    RETURNS @tParts TABLE ( part nvarchar(2048) )
    AS
    BEGIN
        if @sString is null return
        declare @iStart int,
          @iPos int
        if substring( @sString, 1, 1 ) = @cDelimiter 
        begin
         set @iStart = 2
         insert into @tParts
         values( null )
        end
        else 
         set @iStart = 1
        while 1=1
        begin
         set @iPos = charindex( @cDelimiter, @sString, @iStart )
         if @iPos = 0
          set @iPos = len( @sString )+1
         if @iPos - @iStart > 0   
          insert into @tParts
          values ( substring( @sString, @iStart, @iPos-@iStart ))
         else
          insert into @tParts
          values( null )
         set @iStart = @iPos+1
         if @iStart > len( @sString ) 
          break
        end
        RETURN
    
    END
    

    In SQL Server 2008 you can achieve the same with .NET code. Maybe it would work faster, but definitely this approach is easier to manage.

    Sung Meister : wow, why would anyone mark down this answer without explanation?
    XOR : Thanks, I would also like to know. Is there an error here? I wrote this code perhaps 6 years ago and it has been working OK ever since.
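
    For reference, a quick usage example of the function above:

    SELECT part
    FROM dbo.SDF_SplitString(N'1,2,3,4,5', N',');
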
  • http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=50648

    A selection of different methods

  • Erland Sommarskog has maintained the authoritative answer to this question for the last 12 years: http://www.sommarskog.se/arrays-in-sql.html

    It's not worth reproducing all of the options here on StackOverflow, just visit his page and you will learn all you ever wanted to know.

  • Hi, I am tempted to squeeze in my favourite solution. The resulting table will consist of 2 columns: PosIdx for the position of the found integer, and Value for the integer itself.

    
    create function FnSplitToTableInt
    (
        @param nvarchar(4000)
    )
    returns table as
    return
        with Numbers(Number) as 
        (
         select 1 
         union all 
         select Number + 1 from Numbers where Number < 4000
        ),
        Found as
        (
         select 
          Number as PosIdx,
          convert(int, ltrim(rtrim(convert(nvarchar(4000), 
           substring(@param, Number, 
           charindex(N',' collate Latin1_General_BIN, 
           @param + N',', Number) - Number))))) as Value
         from   
          Numbers 
         where  
          Number <= len(@param)
         and substring(N',' + @param, Number, 1) = N',' collate Latin1_General_BIN
        )
        select 
         PosIdx, 
         case when isnumeric(Value) = 1 
          then convert(int, Value) 
          else convert(int, null) end as Value 
        from 
         Found
    

    It works by using a recursive CTE as the list of positions, which runs from 1 to 100 by default (SQL Server's default recursion limit). If you need to work with strings longer than 100 characters, simply call the function with 'option (maxrecursion 4000)', like the following:

    
    select * from FnSplitToTableInt
    (
        '9, 8, 7, 6, 5, 4, 3, 2, 1, 0, ' + 
        '9, 8, 7, 6, 5, 4, 3, 2, 1, 0, ' +
        '9, 8, 7, 6, 5, 4, 3, 2, 1, 0, ' +
        '9, 8, 7, 6, 5, 4, 3, 2, 1, 0, ' +
        '9, 8, 7, 6, 5, 4, 3, 2, 1, 0'
    ) 
    option (maxrecursion 4000)
    
  • Try this

    DECLARE @xml xml,@str varchar(100),@delimiter varchar(10)
    SET @str= '1,2,3,4,5,6,7,8,9,10,11,12,13,14,15'
    SET @delimiter =','
    SET @xml = cast(('<X>'+replace(@str,@delimiter ,'</X><X>')+'</X>') as xml)
    SELECT C.value('.', 'varchar(10)') as value FROM @xml.nodes('X') as X(C)
    

    OR

    DECLARE @str varchar(100),@delimiter varchar(10)
    SET @str= '1,2,3,4,5,6,7,8,9,10,11,12,13,14,15'
    ;with cte as
    (
    select 0 a, 1 b
    union all
    select b, charindex(',', @str, b) + len(',')
    from cte
    where b > a
    )
    select substring(@str,a,
    case when b > len(',') then b-a-len(',') else len(@str) - a + 1 end) value      
    from cte where a >0
    

    Many more ways of doing the same are here: How to split comma delimited string?

How to verify if a webpage is completely loaded using javascript

Hi,

I need to disable an image which I am using inside an <a href> link until the page completely loads.

I cannot use document.ready() because I need to disable the image BEFORE the document is ready.

Can someone please help me with this?

Regards, Gnanesh

From stackoverflow
  • Define it in your HTML as disabled:

    <button disabled="disabled">My Button</button>
    

    And then on page load re-enable it.

    This has the downside of breaking functionality for users without Javascript. The other way to do it is to add a small line of code directly after the button:

    <button id="myButton">My Button</button>
    
    <script type="text/javascript">
        document.getElementById('myButton').disabled = true;
    </script>
    

    ...and then re-enable on document.load()

    Edit with new info:
    Is it an input with type "image"? If so, the above will still work. If not, and it's an <a> tag with an image inside it, I wouldn't recommend doing what the accepted answer suggests, sorry. Having an image suddenly appear at the end could get quite frustrating or distracting, considering that the page takes so long to load that you need to disable the link. What I'd suggest instead is this:

    <a href="whatever" onclick="return myLinkHandler();"><img src="..." /></a>
    <script type="text/javascript">
        var myLinkHandler = function() {
            alert('Page not loaded.');  // or something nicer
            return false;
        };
    </script>
    

    and then in your document.ready function, change the definition of the function:

    myLinkHandler = function() {
        alert("Yay, page loaded.");
        return true;
    };
    

    Alternatively, you could put a check inside the function to see if the page has loaded or not.

    var documentReady = false;
    function myLinkHandler() {
        alert (documentReady ? "Ready!" : "Not ready!");
        return documentReady;
    }
    
    window.onload = function () { // or use jQuery or whatever
        documentReady = true;
    };
    
    gnanesh : I am sorry for not providing the correct info. I need to disable an image which is acting as a button in my case
    gnanesh : Thanks a lot for your support
  • A full code sample using jQuery:

    <input type="button" value="My Button" id="mybutton" disabled="disabled" />
    <script type="text/javascript">
      $(document).ready(function() { $('#mybutton').removeAttr('disabled'); });
    </script>
    
    gnanesh : I am sorry for not providing the correct info. I need to disable an image which is acting as a button in my case
  • In the case of the image, just set the style to display:none in the <img> tag and then use the other suggestions to remove or change the CSS attribute:

    $(document).ready(function(){ $("#myImage").css("display","block"); });

    This way, the image won't even appear until the document is ready, and then the user can click on it. If you need to go the extra mile, do as the others suggested and also hide/disable the link in the tag, then use jQuery to show/enable it.

    gnanesh : Thanks for the solution :)

How do I handle date objects in ruby on rails forms?

I am using a rails helper that creates 3 select lists for the purpose of letting a user select a date. If I assign the helper a name, it sets all 3 select lists to have that name, and then only the last one goes through (it overwrites the previous). What I need to know is: What is the best way to take a set of 3 select lists (day, month, year), and combine them to create a date object and save it to my database?

Here is my date field:

<%= select_date Date.today, { :order => [:month, :day, :year], :start_year => Date.today.year, :end_year => Date.today.year + 3 } %>

and here is the parameters passed to my controller (spat out from my dev log)

Parameters: {"date"=>{"month"=>"4", "day"=>"9", "year"=>"2009"}, 
"action"=>"create", "controller"=>"shows", 
"show"=>{"headline"=>"for example, \"Parking lot is next to the bookstore on Main Street\"", "venue_id"=>{"venue_id"=>""}, "cents"=>"00", "dollars"=>"00"}, 
"user_id"=>"3", "load_in"=>"-2", "show_time"=>"-2", "query"=>"Type venue name & location here...", "doors"=>"-2"}

As you can see, the date is passed. Ideally, I would like the date passed as an acceptable date object to my controller. Furthermore, I would like it passed in the "show" hash as it is show data. Is this possible with some simple rails modifications? Or do I have to write some JS (...a bit dirty for this solution in my opinion)?

From stackoverflow
  • I think you want date_select instead of select_date, since you want it associated with a particular field in your show object.

      date_select("show", "date", :default => Date.today, 
        :order => [:month, :day, :year], :start_year => Date.today.year, 
        :end_year => Date.today.year + 3)
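
    With date_select, the date arrives as three "multiparameter" attributes inside params[:show] ("date(1i)" for year, "date(2i)" for month, "date(3i)" for day), and ActiveRecord recombines them into a single Date when you assign them, roughly like this (a sketch; assumes the shows table has a date column):

      # ShowsController#create (sketch)
      def create
        # params[:show] includes "date(1i)", "date(2i)", "date(3i)";
        # ActiveRecord's multiparameter assignment builds the Date object.
        @show = Show.new(params[:show])
        @show.save
      end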
    

Migrating Database SQL Server to Sybase

Hey folks,

I'm migrating a database from MS SQL Server to Sybase 15.2; please suggest any tools which can assist me with this task.

Also, please post your experience with Sybase, especially the Replicator.

Many thanks !

From stackoverflow
  • Depending on the complexity of your database.

    I migrated the other way, from Sybase 11/12 to MS-SQL 2000 using DTS. I had to do some cleanup of both the schema and data.

    From memory I had to use bcp to recopy some data across. I believe the main culprits were "money" datatypes, where the values did not come across correctly.

    You can also use the SQL Server Import and Export utility. I don't know how well (if at all) that will copy indexes, constraints and the rest. Probably just copy table definitions and data.

    So DTS and bcp, which are part of SQL Server (well for SQL 2000 they were anyway), will probably do the job.

    There are 3rd party tools such as Erwin and DbArtisan, which I haven't used for 6-8 years, but I think they'll do the job. Not free.

    Possibly you could use replication to publish a snapshot and have Sybase subscribe to it.