Friday, April 8, 2011

Using the All operator

I would like to see an example that makes the best use of the All operator with a parent-child relationship in LINQ. Can you show me one, please?

From stackoverflow
  • If you want to get the parents along with whether all their children are active:

    from p in MyContext.Parents
    select new
    {
       p,
       ChildrensActive = p.Childrens.All(c=> c.IsActive)
    }
    
  • The All() extension method checks a predicate against all the items; for example, at execution:

    if(order.Lines.All(l=>l.IsClosed)) order.Close();
    

    (checks all lines are closed, and if so, closes the order)

    or in a query:

    var qry = from order in ctx.Orders
              where order.CustomerId == id
              select new {
                 order.OrderId,
                 IsShipped = order.Lines.All(l => l.IsShipped)
              };
    
    Viks : How would I handle a case where I select all Customers having Orders? Is this a good candidate for 'All'?
    Marc Gravell : no; that would be Any - i.e. from cust in ctx.Customers where cust.Orders.Any() select cust;
  • Many LINQ examples here: http://msdn.microsoft.com/en-us/vcsharp/aa336746.aspx

  • IEnumerable<CD> goodCDs = CDs
      .Where(cd => cd.Songs.All(song => song.Rating > 6));
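
Pulling the answers together, here is a minimal, runnable LINQ-to-Objects sketch of All versus Any on a parent/child shape. The Customer and Order classes are hypothetical stand-ins for the entities discussed above.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    class Order
    {
        public bool IsShipped { get; set; }
    }

    class Customer
    {
        public string Name { get; set; }
        public List<Order> Orders { get; set; }
    }

    class Program
    {
        static void Main()
        {
            var customers = new List<Customer>
            {
                new Customer { Name = "A", Orders = new List<Order> { new Order { IsShipped = true } } },
                new Customer { Name = "B", Orders = new List<Order> { new Order { IsShipped = false }, new Order { IsShipped = true } } },
                new Customer { Name = "C", Orders = new List<Order>() } // no orders yet
            };

            // All(): every child matches the predicate; note it returns true for an empty collection.
            var fullyShipped = customers.Where(c => c.Orders.All(o => o.IsShipped));

            // Any(): "customers having orders", the case raised in the comments above.
            var withOrders = customers.Where(c => c.Orders.Any());

            Console.WriteLine(string.Join(", ", fullyShipped.Select(c => c.Name).ToArray())); // A, C
            Console.WriteLine(string.Join(", ", withOrders.Select(c => c.Name).ToArray()));   // A, B
        }
    }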
    

Naming convention for public and private variables?

Is it wrong to use m_varname for public members and, in the same class, _variable for private ones?

From stackoverflow
  • Assuming that you're working with C++, my answer is NO. It's perfectly reasonable, but you should really stick to that convention.

    However, in statically typed languages such as C#, such naming conventions are somewhat redundant.

    Personally I think it's ugly, but it's not always apparent where a variable comes from in C++, so the sugaring might help.

  • Everyone has his/her own preferences as far as naming conventions are concerned. I'd say more people would agree on not having any public variables in a class.

  • Some concerns:

    • Why do you have public variables?

    • Identifiers starting with _ and __ are reserved for system libraries. In practice this doesn't matter very often, but it's good to be aware of it.

    With those things said, there's nothing wrong with creating a naming convention, regardless of how it looks. Just be consistent.

  • The same goes for C++ and for Java: you do not need Hungarian notation or any prefixes/suffixes. You have the keyword "this"!

    class MyClass {
        private:
            int value;
    
        public:
            MyClass(int value) {
                this->value = value;
            }
    };
    

    Of course in this simple example you can (should!) use constructor initialization list ;)

    So, instead of using awkward notations, just employ the language's possibilities. When you know the name of your member variable, you know that it is perfect. Why would you obfuscate it with "_"?

    As for using the same names for public and private members: this is absolutely wrong thinking! Why would one need two things to represent the same value in the same class? Make it private, name it perfectly, and provide public getters and setters.

    Brian Neal : "this" is a pointer in C++.
    Marcin Gil : Right. Will correct!
  • You should not use names that begin with an underscore or contain a double underscore. Those names are reserved for the compiler and implementation. Beyond that restriction, you can use any naming convention you and your team like. Personally, I hate any form of "Hungarian" notation and dislike the m_something notation as well. It really bothers me that if I need to change the type of a variable I have to update its name everywhere it occurs. That's a maintenance headache.
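
For what it's worth, the same no-prefix style carries over to C#. A small sketch mirroring the C++ example above, where "this" disambiguates the field from the constructor parameter so no m_/_ decoration is needed:

    class MyClass
    {
        private int value;

        public MyClass(int value)
        {
            // "this" resolves the name clash between the field and the parameter
            this.value = value;
        }

        public int Value
        {
            get { return this.value; }  // expose it through a property rather than a public field
        }
    }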

What attributes should a good unit test have?

A unit test should

  • produce deterministic results
  • be independent
  • be valid
  • ...

What other characteristics should a test also have?

From stackoverflow
    • not access external resources
    • be readable
  • The Pragmatic Programmers' answer : good tests shall be A-TRIP

    • Automatic
    • Thorough
    • Repeatable
    • Independent
    • Professional
    • Automatable: no manual intervention should be required to run the tests (CI).
    • Complete: they must cover as much code as they can (code coverage).
    • Reusable: no need to create tests that will only be executed once.
    • Independent: running one test should not affect the outcome of another.
    • Professional: tests should be treated with the same value as the code, with the same professionalism, documentation, etc.
  • Ah. My favorite subject :-) Where to start...

    According to xUnit Test Patterns by Gerard Meszaros (THE book to read about unit testing):

    • Tests should reduce risk, not introduce it.
    • Tests should be easy to run.
    • Tests should be easy to maintain as the system evolves around them.

    Some things to make this easier:

    • Tests should only fail for one reason. (Tests should only test one thing; avoid multiple asserts, for example.)
    • There should only be one test that fails for that reason. (This keeps your test base maintainable.)
    • Minimize test dependencies (no dependencies on databases, files, ui etc.)

    Other things to look at:

    Naming
    Have a descriptive name. Test names should read like specifications. If your names get too long, you're probably testing too much.

    Structure
    Use the AAA structure. This is the new fad in mocking frameworks, but I think it's a good way to structure all your tests (see the sketch after these answers).

    Arrange your context
    Act, do the things that need to be tested
    Assert, assert what you want to check

    I usually divide my tests into three blocks of code. Knowing this pattern makes tests more readable.

    Mocks vs. Stubs
    When using a mocking framework, always try to use stubs and state-based testing before resorting to mocking.

    Stubs are objects that stand in for dependencies of the object you're trying to test. You can program behaviour into them and they can get called in your tests. Mocks expand on that by letting you assert whether they were called and how. Mocking is very powerful, but it lets you test the implementation instead of the pre- and post-conditions of your code. This tends to make tests more brittle.

    mezoid : +1 the xUnit test patterns book is a must have IMO for anyone wanting to write good unit tests.
    Spoike : +1 but I have to say that book seems to be more about automatic testing rather than "unit testing" per se.
  • Another factor to keep in mind is the running time. If a test runs too long, it is likely to be skipped.

    1. Must be fully automatic.
    2. Must not assume any preconditions (product X installed, file at location Y, etc.).
    3. Must be person-independent as far as running the scripts is concerned. Results can, however, be analysed by subject experts only.
    4. Must run on every beta build.
    5. Must produce a verifiable report.
  • A unit test should be fast: hundreds of tests should be able to run in a few seconds.

  • One that I haven't seen anyone else mention is small. A unit test should test for one particular thing and that's all. I try to aim to have only one assert and minimize the amount of setup code by refactoring them out into their own methods. I'll also create my own custom asserts. A nice small unit test IMO is about 10 lines or less. When a test is small it is easy to get a quick understanding of what the test is trying to do. Large tests end up being unmaintainable in the long run.

    Of course, small isn't the only thing I aim for... it's just one of the things I value in a unit test. :-)

    Spoike : I believe keeping tests small is a positive side effect from following good unit testing practices.
  • A test is not a unit test if:

    • It talks to the database
    • It communicates across the network
    • It touches the file system
    • It can't run at the same time as any of your other unit tests
    • You have to do special things to your environment (such as editing config files) to run it.

    Tests that do these things aren't bad. Often they are worth writing, and they can be written in a unit test harness. However, it is important to be able to separate them from true unit tests so that we can keep a set of tests that we can run fast whenever we make our changes.

    source: A Set of Unit Testing Rules
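
To tie a few of these points together (descriptive name, AAA layout, a single assert, no external resources), here is a minimal sketch. NUnit is assumed purely for illustration and the Account class is a hypothetical example; any xUnit-style framework reads the same way.

    using NUnit.Framework;

    public class Account
    {
        public decimal Balance { get; private set; }
        public void Deposit(decimal amount) { Balance += amount; }
    }

    [TestFixture]
    public class AccountTests
    {
        [Test]
        public void Deposit_IncreasesBalanceByTheDepositedAmount()
        {
            // Arrange: set up the context
            var account = new Account();

            // Act: do the one thing being tested
            account.Deposit(10m);

            // Assert: one logical check, so the test can only fail for one reason
            Assert.AreEqual(10m, account.Balance);
        }
    }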

Anyone know what IDE settings Scott Guthrie is using?

I'm looking to modify my fonts and colors to match what Scott Guthrie has. Has he posted this anywhere?

From stackoverflow

Force a Samba process to close a file

Is there a way to force a Samba process to close a given file without killing it?

Samba opens a process for each client connection, and sometimes I see it hold open files far longer than needed. Usually I just kill the process, and the (Windows) client will reopen it the next time it accesses the share; but sometimes it's actively reading another file for a long time, and I'd like to just 'kill' one file, not the whole connection.

edit: I've tried 'net rpc file close <fileid>', but it doesn't seem to work. Does anybody know why?

edit: this is the best mention I've found of something similar. It seems to be a problem on the Win32 client, something that Microsoft servers have a workaround for, but Samba doesn't. I wish the net rpc file close <fileid> command worked; I'll keep trying to find out why. I'm accepting LuckyLindy's answer, even though it didn't solve the problem, because it's the only useful procedure in this case.

From stackoverflow
  • If there isn't an explicit option in Samba, it would be impossible to externally close an open file descriptor with standard Unix interfaces.

  • This is probably answered here: http://stackoverflow.com/questions/323146/how-to-close-a-file-descriptor-from-another-process-in-unix-systems

    At a guess, 'net rpc file close' probably doesn't work because the interprocess communication telling Samba to close the file winds up not being looked at until the file you want to close is done being read.

  • Generally speaking, you can't meddle with a process's file descriptors from the outside. Yet as root you can of course do that, as seen in this Phrack article from 1997: http://www.phrack.org/issues.html?issue=51&id=5#article - I wouldn't recommend doing that on a production system though...

  • The better question in this case would be why? Why do you want to close a file early? What purpose does it ultimately have to close the file? What are you attempting to accomplish?

    Javier : it's very common that a Windows client opens a file and seems to have closed it, but the Samba process keeps it open and other users can't erase it. The only way to release it is killing the process.
    Javier : IOW, it's not closing the file 'early'; it should already have been closed long ago.
    X-Istence : My suggestion would be to file a bug report with the samba developers. Then go from there, there is no good way to close a file from outside of a process without possibly putting the process in an unstable state. You can always restart Samba periodically as a work around!
    Jess : X-Istence - this has been a "bug" with Samba for over 5 years, and it hasn't been addressed by them. We do the same thing as Javier and just kill the process.
    X-Istence : LuckyLindy: If this has been a bug in Samba for over 5 years, could you please link to me a bug report?
    Stefan Thyberg : Easier to write questions like this as comments.
  • This happens all the time on our systems, particularly when connecting to Samba from a Win98 machine. We follow these steps to solve it (which are probably similar to yours):

    • See which computer is using the file (i.e. lsof|grep -i <file_name>)
    • Try to open that file from the offending computer, or see if a process is hiding in task manager that we can close
    • If no luck, have the user exit any important network programs
    • Kill the user's Samba process from linux (i.e. kill -9 <pid>)

    I wish there was a better way!

    Javier : yep, that's exactly what I'm doing.
  • I am creating a new answer, since my first answer really just contained more questions, and really was not a whole lot of help.

    After doing a bit of searching, I have not been able to find any current open bugs for the latest version of Samba. Please check out the Samba Bug Report website and create a new bug. This is the simplest way to get someone to suggest ideas as to how to possibly fix it, and to have developers look at the issue. LuckyLindy left a comment on my previous answer saying that this is the way it has been for 5 years now; well, the project is open source, and the best way to fix something that is wrong is by reporting it and/or providing patches.

    I have also found one mailing list entry: Samba Open files. They suggest adding posix locking = no to the configuration file. As long as you don't also have the files handed out over NFS, not locking the file should be okay, that is, if the file being held open is locked.

    If you wanted to, you could write a program that uses ptrace and attaches to the process, then goes through and unlocks and closes all the files. However, be aware that this might possibly leave Samba in an unknown state, which can be more dangerous.

    The workaround that I have already mentioned is to periodically restart Samba. I know it is not a solution, but it might work temporarily.

    Javier : the mail thread was interesting; unfortunately it seems that the problem isn't about using POSIX locks or not, it's about Samba believing the file should be open, so other Samba clients can't delete it.

What is the correct way to declare and use a FILE * pointer in C/C++?

What is the correct way to declare and use a FILE * pointer in C/C++? Should it be declared global or local? Can somebody show a good example?

From stackoverflow
  • It doesn't matter at all whether it's local or global. The scope of the file pointer has nothing to do with its use.

    It's usually a good idea to avoid global variables as much as possible.

    Here's a sample showing how to copy from input.txt to output.txt:

    #include <stdio.h>
    int main (void) {
        FILE *fin;
        FILE *fout;
        int c;
    
        fin = fopen ("input.txt", "r");
        if (fin != NULL) {
            fout = fopen ("output.txt", "w");
            if (fout != NULL) {
                c = fgetc (fin);
                while (c >= 0) {
                    fputc (c, fout);
                    c = fgetc (fin);
                }
                fclose (fout);
            } else {
                fprintf (stderr, "Cannot write to output.txt");
            }
            fclose (fin);
        } else {
            fprintf (stderr, "Cannot read from input.txt");
        }
        return 0;
    }
    
  • Here is the first hit on google for "file io in c"

    http://www.cs.bu.edu/teaching/c/file-io/intro/

    Here is the third hit from gamedev with more of a C++ slant

    http://www.gamedev.net/reference/articles/article1127.asp

    You declare the pointer in the scope that you need it.

  • It's just an ordinary pointer like any other.

    FILE *CreateLogFile() 
    {
        return fopen("logfile.txt","w"); // allocates a FILE object and returns a pointer to it
    }
    
    void UsefulFunction()
    {
       FILE *pLog = CreateLogFile(); // it's safe to return a pointer from a func
       int resultsOfWork = DoSomeWork();
       fprintf( pLog, "Work did %d\n", resultsOfWork );  // you can pass it to other functions
       fclose( pLog ); // just be sure to clean it up when you are done with fclose()
       pLog = NULL;    // and it's a good idea to overwrite the pointer afterwards
                       // so it's obvious you deleted what it points to
    }
    
  • #include <stdio.h>   /* fopen, fgetc, fclose, printf, perror */
    #include <stdlib.h>  /* exit */
    
    int main(void)
    {
      int c; /* int, not char, so the EOF return value of fgetc fits */
      FILE *read;
      read = fopen("myfile", "r"); // opens "myfile" for reading
      if(read == NULL)
      {
        perror("Error: could not open \"myfile\" for reading.\n");
        exit(1);
      }
      c = fgetc(read);
      fclose(read);
      printf("The first character of myfile is %c.\n", c);
      return 0;
    }
    

    You're perfectly allowed to declare global filehandles if you like, just like any other variable, but it may not be recommended.

    This is the C way. C++ can use this, but I think there's a more C++ friendly way of doing it. As a note, I hate it when questions are marked C/C++, because C and C++ are not the same language and do not work the same. C++ has a lot of different ways to do things that C doesn't have, and they may be easier for you to do in the context of C++ but are not valid C. So while this will work for either language, it's not what you want if you predominantly use C++.

    EDIT: Added some error checking. Always use error checking in your code.

  • First, keep in mind that a file pointer (and the associated allocated structure) is based on the lower-level open(), read() and write() calls. The associated file descriptor (obtained by fileno(file_pointer)) is the least interesting thing, but something you might want to watch your scope with.

    If you're going to declare a file pointer as global in a module, it's usually a very good idea to keep it static (contained within that module / object file). Sometimes this is a little easier than storing it in a structure that is passed from function to function, if you need to write something in a hurry.

    For instance, (bad)

    #include <stdio.h>
    #include ...
    
    #define MY_LOG_FILE "file.txt"
    
    FILE *logfile;
    

    Better done as:

    #include <stdio.h>
    
    #define MY_LOG_FILE "file.txt"
    
    static FILE *logfile;
    
    int main(void)
    {
    

    UNLESS, you need several modules to have access to that pointer, in which case you're better off putting it in a structure that can be passed around.

    If it's needed only in one module, consider declaring it in main() and letting other functions accept a file pointer as an argument. So, unless your functions within the module have so many arguments that another would be unbearable... there's (usually) no reason to declare a file pointer globally.

    Some logging libraries do it, which I don't care for... especially when dealing with re-entrant functions. Never mind C's monolithic namespace :)

Flex: How to access properties of component in dynamic creation?

Hello everyone. I have a component which is created dynamically. I want to access the properties on it.

For example, I create a VBox and I want to access the text font or the gap of the component.

var MyVBox: VBox = new VBox; MyPanel.addChild(MyVBox);

How should it be done?

From stackoverflow
  • The thing to remember when using ActionScript instead of MXML is that the style properties are not accessed as properties on the object but through the getStyle("propertyName") method. Font is a style for example.

    Jejad : I am new to ActionScript. I coded in Delphi before and the code is very different. I started studying Flex 2 weeks ago. By the way, how should it be done? Can you give me example code? Thank you for your quick response
  • All properties and methods are accessed with "." (dot) notation.

    Example:

    myVBox.width = 400;
    

    Styles are set using the setStyle() method. In your case that would be

    myVBox.setStyle("fontFamily", "arial");
    myVBox.setStyle("verticalGap", 20);
    

    Check the docs at http://livedocs.adobe.com/flex/3/langref/ for the available properties and styles of each component.

Can I use JNDI to access files/their content?

Can JNDI be used in a Java servlet to access filesystem on the local machine or a remote machine?

I am able to bind local directories/files with it, but I am not able to find a mechanism (if one exists) to read/modify the files' contents.

The files are simple text files.

Please tell me if it is possible, and how?

From stackoverflow
  • JNDI is an API (i.e. an interface): implementations may vary in what they allow you to do. I think that generally JNDI implementations are for resource allocation and discovery, and are not used for this purpose.

  • As its name implies, JNDI provides Java applications access to directory and naming services. It is targeted for retrieving resource names from the directory. Java applications typically use JNDI to look up JDBC data sources, mail sessions and for authenticating and authorizing users. I suppose you could store a custom object in the directory, but this is not the recommended way of things.

    For local files you could add a String resource with the name and path of the file, retrieve it with JNDI and read/write it with standard means.

    Mohit Nanda : Can 'adding a String resource with the name and path of the file' be done for a remote filesystem?
    kgiannakakis : If this is a public url yes. You could use that and download the file to read it. You can't edit it however, unless the remote system allows something like FTP or HTTP PUT.

How to limit the number of major versions to maintain on a document library list definition?

I have 2 question here,

Scenario: I have created a document library with my own list definition, having the default document library settings. There are quite a number of document libraries created using my definition. Now I want to change the version settings to limit the number of major versions to 5.

I came across the "MajorVersionLimit" List element property of the schema file; unfortunately it is available only for the Content Migration schema. It didn't work for me.

Question 1: Is there any simple mechanism where I can enable such settings across my site collection?

I know I can write a piece of .NET code to change this. Question 2: If I do this, will it break the list from its list definition?

Thanks in advance. ~Yuva

From stackoverflow
  • I don't know how you can set these values via the configuration file, but they are exposed on an SPList object. So the trick is to find a way to capture when a new list is created from your template, and then set the properties (SPList.MajorVersionLimit, for example) appropriately.

    I haven't tried this, but here's where I'd start.

    1. In the ListTemplate element configuration file, set the 'NewPage' property to an aspx page that should get displayed when the user clicks the 'create' button in the UI.
    2. In the code behind for that page, create the list manually (using the OM) and set the properties.

    There may be a better way, but I don't see it. Good luck.

  • The MajorVersionLimit argument does work with schema.xml. Just remember to check the Create a version each time you edit a file in this document library? option.

  • In my opinion the easiest way to modify existing webs is to write a small PowerShell script that sets MajorVersionLimit on each list, something like this:

    # Set up connection
    $spSite = new-object Microsoft.SharePoint.SPSite($siteurl)

    for($i=0; $i -lt $spSite.AllWebs.Count; $i++)
    {
        $spWeb = $spSite.AllWebs[$i]
        for($j=0; $j -lt $spWeb.Lists.Count; $j++)
        {
            $aList = $spWeb.Lists[$j]
            if( $aList.Title -ceq "dokumenter") # case sensitive
            {
                if( $aList.EnableVersioning -eq $false)
                {
                    $aList.EnableVersioning = $true
                    $aList.MajorVersionLimit = 5
                    $aList.Update()
                }
            }
        }
    }
    
  • It's actually not true; you can set MajorVersionLimit through Schema.xml (as a List element attribute). It worked perfectly well for me. There is just no IntelliSense support for these attributes.

    The solution is described on Doug "bobthebuilder" McCusker's blog ( http://suguk.org/blogs/sharepointhack/archive/2008/01/14/7809.aspx ).
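
For the .NET route mentioned in the question and the first answer, here is a minimal sketch using the SharePoint object model. The site URL and list title below are placeholders for your own environment.

    using Microsoft.SharePoint;

    class VersionLimitUpdater
    {
        static void Main()
        {
            // Placeholders: point these at your own site and library.
            using (SPSite site = new SPSite("http://server/sites/mysite"))
            using (SPWeb web = site.OpenWeb())
            {
                SPList list = web.Lists["Shared Documents"];
                list.EnableVersioning = true;    // major versioning must be switched on
                list.MajorVersionLimit = 5;      // keep at most 5 major versions
                list.Update();
            }
        }
    }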

How much is it fair to assume about implementation when doing big-O calculations?

When programmers versed in a wide variety of languages are discussing the merits of an algorithm expressed in pseudocode, and the talk turns to efficiency, how much should be assumed about the performance of the ultimate implementation language?

For example, when I see something like:

add x to the list of xs

I see an O(1) operation (cons), while someone else may see an O(n) array copy-to-extend or worse.
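
To make that concrete, a small C# illustration of the two readings (purely illustrative, using the standard library collections):

using System.Collections.Generic;

class AddToTheListOfXs
{
    static void Main()
    {
        var linked = new LinkedList<int>();
        linked.AddFirst(42);        // cons-style prepend: always O(1)

        var arrayBacked = new List<int>();
        arrayBacked.Add(42);        // amortized O(1), but O(n) when the backing array has to grow
    }
}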

The harder you look, the more such cases there are, at least potentially. For example,

add x to the running total

may look O(1) to most of us (integer addition, right?) but it could plausibly be O(ln(max(x,total))) if the numbers were very large (BigNums). Or it could be O(x) if add were implemented as

b = x
while b > 0
    inc total
    dec b

and I'll bet there are even worse ways to do it.

So what's reasonable? Always assume the most efficient possible implementation (even though no extant language provides it) or...?


Edit:

To stir the pot some more here:

I note several people saying you should state your assumptions.

Fine, but suppose I assume (as I do) that advancing to the next line of code is free. Should I have to state that? Someone could, after all, be imagining some sort of mechanical implementation where going to the next line is a time-consuming step. Until they bring it up, I might never have considered the possibility.

I'd contend for this reason that it isn't possible to state all your assumptions.

So my question is really "since there are some assumptions you can make without having to state them, and some you can't, where do you draw the line?"

From stackoverflow
  • While some caveats about implementation might be appropriate, I think if you're trying to compare the merits of algorithms as algorithms, the meat of it is in the optimal case. Once you know the theoretical boundaries of performance, there's another set of problems in trying to get your implementation to approximate that, but you're never going to exceed your theoretical bound.

    Jon Skeet : But you shouldn't assume that *every* operation will be optimal for the same type. For example, different lists give constant time insertion anywhere if you've got a reference to one node (linked list) *or* constant time index access (array list), but not both at the same time.
    Jon Skeet : (Continued) Other implementations will be balanced between the two (e.g. O(log n)), etc. If you assume both operations will be constant, you may miss an algorithm which is faster in *practical* big-O terms.
    chaos : Yeah. You can assume an implementation that's as good as possible, but no better. :)
    MarkusQ : But (to stir the pot) ought you assume that all instances of the same abstract "type" have the same concrete implementation? Is it reasonable to assume three collections, one which is fast to add to, one that is fast to index into, and one that is fast to search?
    chaos : Sure, why not? If your algorithm would benefit from that, then that's part of its performance boundaries.
    David Thornley : O(n) limits are usually for the worst case.
    MarkusQ : Worst case data in the best case implementation.
  • I think it's reasonable to assume a normal, but not best possible implementation, unless you are previously aware of such special scenarios as the one you construed. The difference between both should be constant. If you always assume for the worst like you showed in your example, I doubt that you could do any big-O calculations at all.

  • I think it's okay to make assumptions, so long as you state them where there's any cause for doubt. I would assume without statement that adding fixed-size integers is a constant time operation, but as there are so many wildly different implementations of lists (depending on what you need) it's worth stating the assumptions there. Likewise adding variable-size integers would be worth calling out explicitly - particularly because it may not be obvious when just reading through it relatively casually.

    That way when you consider the algorithm in a concrete setting, you can test the assumptions before starting the implementation. It's always nice if the API you're working with (collections etc) states the complexity of operations. Java and .NET are both getting quite good on this front, IIRC.

  • Interesting question.

    In my opinion, the steps involved in the algorithm in question should not be complex algorithms themselves. In other words, if "add x to the list of xs" is a complex operation, it needs to be fully specified, and not just a single sentence. If it is expressed as a single, straightforward sentence, then assume it is appropriately fast (e.g. O(1), or O(log n) or similar, depending on the apparent operation).

    Another case where this is clear is in string building. Sure, these days we have the StringBuilder class and understand the problem well. But it was only a few years ago that I saw an application brought to its knees because the developer coded a simple "append thousands of small strings together to make a big string", and each append did a full memory copy.

  • I believe the first step of doing big-O calculations is to decide which actions you consider elementary. In order to do that, you may stand by the common point of view for standard actions (+, -, *, / are elementary, so are insertions into lists, and so forth).

    But in particular cases (those you mentioned, for instance), these actions are not elementary any more and you have to make new assumptions. You can make them using simple tests that you have tried, or just out of reasoning. Anyway, whatever works for you is good!

  • The answer is to look at your use cases, and analyze in terms of them, rather than in the absolute.

    What kind of data are you going to be consuming in the real world? If you're going to be crunching longs, assume constant time addition. If you're going to be crunching lots of unbounded BigNums, that's not a good assumption.

    Does a list fit with the rest of your algorithm, or do you need lots of random access? An O(n) array copy is a worst-case scenario, but is it legit to amortize your analysis and consider what happens if you double the array size each time (giving you an amortized O(1) cost)?

    Being able to answer these questions in terms of your own algorithms makes you a good algorithmatist. Being able to answer them in terms of algorithms you're considering for a project you're working on makes you a responsible implementer.

    If your algorithm is being designed purely in the abstract, then be straightforward and explicit in your assumptions. If you assume constant time addition, just mention it. If you're doing amortized analysis, be sure you mention that.

    MarkusQ : It was a discussion here that prompted the question, and I think we all thought we were being clear in our assumptions. But there comes a point when you really don't want to have to type "and I also assume that checking if N is odd is O(1)" for every operation.
    rampion : True enough. Lots of common assumptions can be derived from simple statements - if you mention constant data size, we can assume constant addition.
  • Assume fair to decent performance. If no guarantees are provided in some documented form, assume as little as possible, but it should be safe to expect that the basic data structures and their operations perform as taught by the computer science department.

    But what language you're running kind of matters.

    A good C programmer is able to compile the code in his head. Meaning, he is able to glance at the code and get a fairly good approximation of what it would look like in machine code.

    A Haskell programmer can't do that. It's the exact opposite. A Haskell programmer is going to care more about the complexity of his function and with good reason.

    However, the Haskell programmer will never write code that's going to outperform what the C programmer writes. It's a misunderstanding among many that this is the case. Programmers who know how to write fast code on modern hardware are superior in the performance segment. However, it would be fair to assume that the Haskell programmer will be more productive.

    There's a sizable overhead to any programming language, and notably to managed environments. This doesn't make them slow, but it makes them slower. The less hardware-specific, the slower the execution.

    There are plenty of ways you can write code to eliminate performance bottlenecks without modifying your original algorithm, and a lot of this has to do with how the hardware is expected to execute your code.

    What you assume about code and its performance has to be based on measurements and profiling. It's simply not fair to disregard the hardware performance, which big-O notation does (I'll admit it's very theoretical, but not that practical).

  • I don't think that it is helpful, when discussing the efficiency of algorithms, to go into the details of the different languages that can be used (or the OS or hardware architecture or ...).

    More generally, it's always good to isolate the different aspects of a question, even if they influence each other.

    My experience is that in most cases, when it comes to complex algorithms, the most efficient one will work well in most language implementations. Optimization for the particular language is, in most cases, a more detailed concern. When discussing the algorithm in pseudocode, the details should express the what in the most understandable way.

    Your examples are such details, parts used in complex algorithms, and they can be discussed and adjusted during implementation in some language. When discussing those details, pseudocode can be used, but the syntax of the actual language(s) would do a better job.

  • I usually assume the best implementation, and explicitly mention the assumed implementation if necessary (e.g. use a HashSet, or a BST). I would define the data structures being used, and not leave them ambiguous (i.e. I'd clearly state whether this is an array or a linked list). This would give a clear view of the order of any internal operations.

    I also consider basic, elementary operations to be O(1), or otherwise I clearly state that. Adding two numbers is O(1), unless we know that our numbers will cause overflow for primitive data types. For example, when assessing a shortest path algorithm, you don't consider the addition of costs to be a heavy operation. In my opinion, we are assuming that all similar algorithms in one category will have the same problem if numbers are larger than a certain bound, and thus they will still be comparable.

    Note that we (should) use the big-O notation to compare algorithms, rather than calculate their running time. For this purpose, simple operations that need to be performed in all algorithms of a certain category (e.g. addition) can safely be assumed to have O(1). It's also acceptable to say that the cost of mathematical and logical operations is almost the same. We say a linear search is O(n), and a linear sum is also O(n). So, following this logic, all elementary operations can be thought of as O(1).

    As a final note, what if we write an algorithm that requires 128-bit, or even 256-bit integers? I believe we should not assume it will be implemented with a BigInteger, because in a few years, these data types will probably be mainstream. Think about the same idea for algorithms that were invented in the 1960's, and you'll see how the same logic applies.

  • I would also assume the best implementation. It usually doesn't matter what it really is unless it becomes a bottleneck, in which case you should be using a better data structure anyways.

  • In some applications, hashtables inserts/reads are considered O(1) as well. It depends on what your baseline is.

    For something even as 'simple' as sorting, if you're comparing sorts of large data that won't fit inside memory, it might be worth considering the access time for the hard disk as well. In this case, mergesort is much better than almost all of the other algorithms.

    In other cases, specialized hardware may make some algorithms perform much better, by allowing things to be done in constant time that might normally take O(n).

    Hosam Aly : Can you support your claim that mergesort is better for disc access? I have seen a presentation by Jon Bentley before, in which he showed that quicksort is better than mergesort when it comes to memory or disk access.
    FryGuy : Hey, this isn't Wikipedia :). And I didn't specifically mention it being faster than quicksort. It may, or may not be. However, it will be much faster than something like heapsort that requires a lot of random access, even though they are both O(n log n)
  • Not exactly an answer to the question, but this question reminds me how lucky we are that somebody already invented RAM (Random Access Memory). The question would be much more difficult without it.

    I think big-O notation is not an absolute property of the system. Any big-O statement needs to be attached to an assumption about what kind of input is of interest and expected to grow.

Passing data from a jquery ajax request to a wcf service fails deserialization?

I use the following code to call a WCF service. If I call a (test) method that takes no parameters but returns a string, it works fine. If I add a parameter to my method, I get a weird error:

{"ExceptionDetail":{"HelpLink":null,"InnerException":null,"Message":"The token '\"' was expected but found '''.","StackTrace":" at System.Xml.XmlExceptionHelper.ThrowXmlException(XmlDictionaryReader reader, String res, String arg1, String arg2, String arg3)\u000d\u000a at System.Xml.XmlExceptionHelper.ThrowTokenExpected(XmlDictionaryReader reader, String expected, Char found)\u000d\u000a at System.Runtime.Serialization.Json.XmlJsonReader.ParseStartElement()\u000d\u000a at System.Runtime.Serialization.Json.XmlJsonReader.Read()\u000d\u000a at System.ServiceModel.Dispatcher.DataContractJsonSerializerOperationFormatter.DeserializeBodyCore(XmlDictionaryReader reader, Object[] parameters, Boolean isRequest)\u000d\u000a at System.ServiceModel.Dispatcher.DataContractJsonSerializerOperationFormatter.DeserializeBody(XmlDictionaryReader reader, MessageVersion version, String action, MessageDescription messageDescription, Object[] parameters, Boolean isRequest)\u000d\u000a at System.ServiceModel.Dispatcher.OperationFormatter.DeserializeBodyContents(Message message, Object[] parameters, Boolean isRequest)\u000d\u000a at System.ServiceModel.Dispatcher.OperationFormatter.DeserializeRequest(Message message, Object[] parameters)\u000d\u000a at System.ServiceModel.Dispatcher.DemultiplexingDispatchMessageFormatter.DeserializeRequest(Message message, Object[] parameters)\u000d\u000a at System.ServiceModel.Dispatcher.UriTemplateDispatchFormatter.DeserializeRequest(Message message, Object[] parameters)\u000d\u000a at System.ServiceModel.Dispatcher.CompositeDispatchFormatter.DeserializeRequest(Message message, Object[] parameters)\u000d\u000a at System.ServiceModel.Dispatcher.DispatchOperationRuntime.DeserializeInputs(MessageRpc& rpc)\u000d\u000a at System.ServiceModel.Dispatcher.DispatchOperationRuntime.InvokeBegin(MessageRpc& rpc)\u000d\u000a at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage5(MessageRpc& rpc)\u000d\u000a at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage4(MessageRpc& rpc)\u000d\u000a at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage3(MessageRpc& rpc)\u000d\u000a at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage2(MessageRpc& rpc)\u000d\u000a at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage1(MessageRpc& rpc)\u000d\u000a at System.ServiceModel.Dispatcher.MessageRpc.Process(Boolean isOperationContextSet)","Type":"System.Xml.XmlException"},"ExceptionType":"System.Xml.XmlException","Message":"The token '\"' was expected but found '''.","StackTrace":" at System.Xml.XmlExceptionHelper.ThrowXmlException(XmlDictionaryReader reader, String res, String arg1, String arg2, String arg3)\u000d\u000a at System.Xml.XmlExceptionHelper.ThrowTokenExpected(XmlDictionaryReader reader, String expected, Char found)\u000d\u000a at System.Runtime.Serialization.Json.XmlJsonReader.ParseStartElement()\u000d\u000a at System.Runtime.Serialization.Json.XmlJsonReader.Read()\u000d\u000a at System.ServiceModel.Dispatcher.DataContractJsonSerializerOperationFormatter.DeserializeBodyCore(XmlDictionaryReader reader, Object[] parameters, Boolean isRequest)\u000d\u000a at System.ServiceModel.Dispatcher.DataContractJsonSerializerOperationFormatter.DeserializeBody(XmlDictionaryReader reader, MessageVersion version, String action, MessageDescription messageDescription, Object[] parameters, Boolean isRequest)\u000d\u000a at System.ServiceModel.Dispatcher.OperationFormatter.DeserializeBodyContents(Message message, 
Object[] parameters, Boolean isRequest)\u000d\u000a at System.ServiceModel.Dispatcher.OperationFormatter.DeserializeRequest(Message message, Object[] parameters)\u000d\u000a at System.ServiceModel.Dispatcher.DemultiplexingDispatchMessageFormatter.DeserializeRequest(Message message, Object[] parameters)\u000d\u000a at System.ServiceModel.Dispatcher.UriTemplateDispatchFormatter.DeserializeRequest(Message message, Object[] parameters)\u000d\u000a at System.ServiceModel.Dispatcher.CompositeDispatchFormatter.DeserializeRequest(Message message, Object[] parameters)\u000d\u000a at System.ServiceModel.Dispatcher.DispatchOperationRuntime.DeserializeInputs(MessageRpc& rpc)\u000d\u000a at System.ServiceModel.Dispatcher.DispatchOperationRuntime.InvokeBegin(MessageRpc& rpc)\u000d\u000a at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage5(MessageRpc& rpc)\u000d\u000a at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage4(MessageRpc& rpc)\u000d\u000a at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage3(MessageRpc& rpc)\u000d\u000a at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage2(MessageRpc& rpc)\u000d\u000a at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage1(MessageRpc& rpc)\u000d\u000a at System.ServiceModel.Dispatcher.MessageRpc.Process(Boolean isOperationContextSet)"}

My jQuery looks like this, but I tried changing the actual data, which I send as string-serialized JSON (as you can see), to a pure JSON object, with the same sad result.

$.ajax({
    type: "POST",
    contentType: "application/json; charset=utf-8",
    url: "ajax/Statistics.svc/Get7DaysStatistics",
    dataType: "json",
    data: "{'customerId': '2'}",
    timeout: 10000,
    success: function(obj) { updateStatistics(obj.d); },
    error: function(xhr) {
        if (xhr.responseText)          
            $("body").html(xhr.responseText);
        else
            alert('unknown error');
        return;
    }
});

The wcf service looks like this:

    [SuppressMessage("Microsoft.Performance", "CA1822:MarkMembersAsStatic"), OperationContract]
    public string Get7DaysStatistics(string customerId)
    {
        Debug.WriteLine(customerId);
        return "Test done";
    }

It's placed in a class with the following attributes:

[ServiceContract(Namespace = "")]
[AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]

I won't list the configuration in the web.config to keep this long message "short", but I can post it if anybody thinks they can use it. I just want to stress that I CAN call a method and get a result (a string or even a JSON object I can read from) as long as I DON'T pass any data to the WCF service.

From stackoverflow
  • I think on your operation you need this attribute:

    [WebInvoke(Method="POST",
               BodyStyle=WebMessageBodyStyle.Wrapped,
               ResponseFormat=WebMessageFormat.Json
    )]
    

    See jQuery AJAX calls to a WCF REST Service for more info.

    Per Hornshøj-Schierbeck : I added the attribute but I still get an error - unfortunately the error I now get is empty so I can't see what the new problem is. I even tried adding "RequestFormat = WebMessageFormat.Json".
  • Use double quotes instead of single quotes in the JSON you are sending to the service. That is, change:

    data: "{'customerId': '2'}",
    

    to

    data: '{"customerId": "2"}',
    

    I've tested this locally and this fixes the problem.

    Incidentally, I debugged this using a method I've often used when calling ASMX and WCF services using libraries other than the built-in ASP.NET tools. I called the service using the client proxy created by an asp:ScriptReference and then inspected the request being sent to the server using an HTTP sniffer (such as HttpFox for FireFox) and compared the request to the one being sent by jQuery. Then you can usually quickly see what is different (and so probably wrong) with the request. In this case, it was clear that there was a difference in the POST data being sent.

    Per Hornshøj-Schierbeck : That's wild - I actually did that before I read your post, but it didn't work. Then I figured perhaps if I removed the WebInvoke attribute davogones posted (and I added) it might just work, and it did!
    Per Hornshøj-Schierbeck : After a bit of reflection (on my error) I realize I just suck at serializing JSON :P My data wasn't correctly formatted JSON - this is no magic, JSON needs double quotes and not single. JavaScript might not care but JSON does, so of course it should look like this.
  • Hello. I wrote a function in JScript that solved the problem of sending data via POST to a WCF service. Here is the code:

    function formatJsonDataToWCFPost(d){
        var t = {};
        var a = '{';
        for (var j = 0; j < d.length; j++) {
            var x = d[j];
            for (var i in x) {
                if (x.hasOwnProperty(i)) {
                    var c = j + 1 == d.length ? '}' : ',';
                    a += ('"' + i + '":"' + x[i] + '"' + c);
                }
            }
        }
        return a;
    }

Does between in HQL compare strictly or not?

If I write in HQL

A between 5 and 10

is that equivalent to

A >= 5 and A <= 10

or

A > 5 and A < 10

or some other of the 4 combinations?

From stackoverflow
  • I didn't find any specification of the behavior in the Hibernate docs, but the between operator in HQL is translated to the between operator in SQL, which is inclusive.

    So between in HQL is also inclusive, that is

    A between 5 and 10
    

    is equivalent to

    A >= 5 and A <= 10
    
  • Obviously there is some confusion regarding this. Natural language might suggest it is exclusive, but that is not true. In reality it is A >= 5 and A <= 10. Since there were already contradicting answers given (and deleted), there needs to be more clarification (from http://www.techonthenet.com/sql/between.php):

    Example #1 - Numbers
    
    The following is an SQL statement that uses the BETWEEN function:
    
    SELECT *
    FROM suppliers
    WHERE supplier_id between 5000 AND 5010;
    
    This would return all rows where the supplier_id is between 5000 and 5010, inclusive. It is equivalent to the following SQL statement:
    
    SELECT *
    FROM suppliers
    WHERE supplier_id >= 5000
    AND supplier_id <= 5010;
    

MVP/MVC vs. traditional n-tier approach for WinForms apps

We have a large suite of apps, most are C# 1.1, but at least 10 major ones are in VB6. We are undertaking a project to bring up the VB6 apps to .NET 3.5.

All the C# 1.1 apps are written using a traditional n-tier approach. There isn't really any architecture/separation in the UI layer. Most of the code just responds to events and goes from there. I would say that from the point of view of maintainability it's been pretty good, and it's easy to follow the code and come up to speed on new apps.

As we are porting VB6 apps, the initial thinking was that we should stick to the existing pattern (e.g. n-Tier).

I am wondering whether it's worth breaking the pattern and doing the VB6 apps using the MVP/MVC pattern. Are MVC/MVP WinForms apps really easier to maintain? I worked on an MVC-based project and did not feel that it was easier to maintain at all, but that's just one project.

What are some of the experiences and advice out there?

From stackoverflow
  • It moves a thin layer of code you still probably have on the UI. I say thin, because from your description you probably have plenty of code elsewhere. What this gives you is the ability to unit test that thin layer of code.

    Update 1: I don't recommend re-architecting while doing the upgrade; the extra effort is best spent on getting automated tests (unit/integration/system) in place, since you will have to be testing that the upgrade works anyway. Once you have the tests in place, you can make gradual changes to the application with the comfort of having tests to back the changes.

  • Dude, if something works for you, you guys are comfortable with it, and your team is up to speed with it, why do you need to change?

    MVC/MVP sounds good... Then why am I still working on n-Tier myself?

    I think before you commit resources to actual development on this new way of programming... You should consider if it works for YOUR team.

  • If you are porting the VB6 apps rather than doing a full rewrite, I'd suggest you focus on your Pri 1 goal: to get to the .NET world asap. Just doing this would have quite a lot of benefits for your org.

    Once you are there, you can evaluate whether it's beneficial to invest in rearchitecting these apps.

    If you are doing a full rewrite, I'd say take the plunge and go for MVP/MVVM-patterned WPF apps. WPF will give you nicer visuals. The MVP/MVVM pattern will give you unit testability for all layers, including the visual one. I also assume that these apps are related, so chances are you might be able to actually reuse your models and views. (Though I might be wrong here.)

  • MVC in particular does not exclude n-Tier architecture.

    We also have an ASP.NET 1.1 business application, and I find it a real nightmare to maintain. When event handlers do whatever they like (maybe tweak other controls, maybe call something in the business logic, maybe talk directly to the database), it is only by chance that the software works at all.

    With MVC, if used correctly, you can see the way the data flows from the database to your UI and back. That makes it easier to track down errors if you get unexpected behaviour.

    At least, it is so with my own little project.

    I'll make the point once again: whatever pattern you use, stick to the clear n-Tier architecture. 2-Tier or 3-Tier, just don't mess everything into a big interconnected ball.

  • "Change - that activity we engage in to give the allusion of progress." - Dilbert

    Seriously though, just getting your development environment and deployment platforms up to .NET 3.51 is a big step in and of itself. I would recommend that things like security reviews and code walkthroughs should probably come before re-architecting the application.

    MVC and MVVM are excellent paradigms, particularly in terms of testability. Don't forget about them, but perhaps you should consider a pilot project before full-scale adoption?
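
As a rough illustration of the "thin, testable layer" the first answer describes, here is a minimal MVP-style sketch; the interface and class names are invented for the example.

    // The view stays dumb: the Form implements this and forwards its events to the presenter.
    public interface ICustomerView
    {
        string CustomerName { get; set; }
        void ShowError(string message);
    }

    public class CustomerPresenter
    {
        private readonly ICustomerView view;

        public CustomerPresenter(ICustomerView view)
        {
            this.view = view;
        }

        // Logic that would otherwise sit in a button-click handler; it can now be
        // unit tested against a fake ICustomerView with no WinForms involved.
        public void Save()
        {
            if (string.IsNullOrEmpty(view.CustomerName))
            {
                view.ShowError("Customer name is required.");
                return;
            }
            // ...call into the existing business/data tiers here...
        }
    }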

SharePoint 2007 Site Template Content Types

I am new to SharePoint development. We have created a base site template and have used that template to start new sites in other locations on the same server. This works fine but the newly created site seems to "flatten" the custom content types created in the original site. I would think there would be a way to keep the original content type inheritance intact to help support any necessary modifications on the new site. They can still make the modifications but they take longer because you have to visit each list individually. Does anyone know how to fix this or know a better way to approach this?

From stackoverflow
  • Hi Adam,

    The best way to do this is to create a site definition where the content types are within features scoped to the farm.

    Tim

  • The problem you are having is the move from your original site collection to another. The site template does not store the complete definition of a site - only the differences from the underlying site definition. Move from one site collection to another and you lose the underlying site definition and run into problems such as the loss of your content types.

    Site templates and site definitions are two separate customizations you can undertake. You need to make a decision based on your requirements as to which is best.

    Site templates

    Pro's:

    • Easy to create and reuse through the SharePoint interface
    • Ideal for end users
    • A site template is a customization of the underlying default site definition

    Con's:

    • Dependency on underlying site definition means SharePoint updates could break your site templates
    • Poor performance since the modifications are held in the database and are read from the DB and compiled on each request
    • Limited customization options

    Site definitions

    Pro's:

    • Made by developers and site administrators
    • Independent of SharePoint default site definitions so not affected by SharePoint updates
    • Limitless customization options
    • Cached on the file system so fast to load

    Con's:

    • Harder to create: involves coding XML
    • Difficult to modify when deployed

    Check out Google for info on creating custom site definitions.

    Hope this helps!

    Jonny

    Adam Carr : Thanks for the clarification! Huge help!
    Adam Carr : Is there a specific site or resource you used to gather this information? A book you would recommend?
  • An article on Sharepoint Site Templates

    http://dotnetdreamer.com/2009/05/09/introduction-to-site-templates/

libspe vs. libspe2? What's the difference?

I've been doing some experiments with the Cell processor in a PS3 that I have sitting around, and I've run across an issue. It appears that there are two versions of the main SPE management library, libspe and libspe2. What is the difference between them? From what I can tell, the main difference is that in libspe they rolled their own threading code, whereas with libspe2 you have to integrate with the pthreads library to achieve multiple threads. Is there any reason that one is better than the other? What is being gained/lost/changed between the two?

From stackoverflow
  • I don't know about that stuff, I don't see those libraries in the Sony-provided SDK (although I couldn't comment if I had them!).

    Have you checked out IBM's site for information? http://www.ibm.com/developerworks/power/cell/

  • Alright, so it looks like libspe2 is the way to go. What I noted above about threading is pretty much the main difference, libspe2 provides a much more flexible threading model. Also, libspe is deprecated as of SDK 2.1 and 3.1 is the current version, so starting any new projects with libspe is probably a bad idea.

    There is a good article with examples on the IBM site here.

Getting XAML of the current screen in silverlight 2

I am trying to send my dynamically created Silverlight 2 page/image to an ASP.NET web service to render it as a bitmap image.

I can see many examples of how to get the XAML using JavaScript (see here) in version 1 of Silverlight, but I have a few differences:

a) I am using Silverlight 2 RC1
b) I have dynamically added controls to the page in C#
c) I would prefer to use C# to get the new XAML to keep all the code in one place.

Does anyone know how to extract the XAML from a control or the page in C#?

Thank you


Update: This is now possible under Silverlight 3, using a writeable bitmap to save the XAML as a JPEG; see my blog post here: http://blog.blueboxes.co.uk/2009/07/21/rendering-xaml-to-a-jpeg-using-silverlight-3/

From stackoverflow
  • Hi John, unfortunately there is no .ToXaml() method on an element tree. You can use VisualTreeHelper to build such a method and recurse through a particular element, I suppose.

    John : This is now possible under Silverlight 3; see the post for the update.
  • Check out this link to see if it will help you at all. http://blogs.vertigo.com/personal/ralph/Blog/archive/2008/11/21/snapshot-of-xaml-control-save-to-image.aspx

  • SilverlightContrib has a XamlWriter class that can extract the XAML from a live control.

    It's free.

    The link is: http://silverlightcontrib.org

    The code would be something like:

    
    // using SilverlightContrib.Xaml;

    var cb = new GroupBox();
    StringBuilder sb = new StringBuilder();
    XamlWriterSettings settings = new XamlWriterSettings();

    using (XamlWriter writer = XamlWriter.CreateWriter(sb, false, settings))
    {
        writer.WriteElement(cb);
    }

    string result = sb.ToString();
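
And as a rough sketch of the VisualTreeHelper recursion suggested in the first answer (this only captures element type names; real XAML output would also need properties and styles):

    using System.Text;
    using System.Windows;
    using System.Windows.Media;

    public static class XamlDumper
    {
        public static string Dump(DependencyObject root)
        {
            var sb = new StringBuilder();
            Walk(root, sb, 0);
            return sb.ToString();
        }

        private static void Walk(DependencyObject element, StringBuilder sb, int depth)
        {
            sb.Append(new string(' ', depth * 2));
            sb.AppendLine("<" + element.GetType().Name + " />");

            int count = VisualTreeHelper.GetChildrenCount(element);
            for (int i = 0; i < count; i++)
            {
                Walk(VisualTreeHelper.GetChild(element, i), sb, depth + 1);
            }
        }
    }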