Friday, April 29, 2011

Sharepoint controls in ASP.NET application

Is there a way I can use the SharePoint controls in an ASP.NET web application like any other controls that come out of the box for ASP.NET? If yes, what are the prerequisites I need to install?

Thank you, Harsha

From stackoverflow
  • Yes, you can use SharePoint "controls" in an ASP.NET application, as long as the application is running in SharePoint.

    In other words, the prerequisite is SharePoint.

    Ahmad Farid : how can we do that?
    Harsha : There are so many nice controls in SharePoint. It would be great if we could use them outside it (in a standalone ASP.NET application)
  • What specific controls are you referring to? Those that you find in SharePoint Designer?

    If you are referring to Web Parts in WSS v3, those that use the ASP.NET Web Part as the base web part (the recommended approach) may work fine in ASP.NET, since the Web Part class inherits from Panel, which inherits from WebControl (going from memory here) - all ASP.NET classes. It would just depend on whether the web part has any SharePoint-specific code, which is highly dependent upon the web part.

  • Host the application in SharePoint's _layouts directory (see this video for more details). Your ASP.NET app will then be "running in SharePoint" and have access to all SharePoint controls.

    Note that some controls don't work unless they are running on an actual SharePoint page.

  • Most controls have internal dependencies on SharePoint (i.e. they use SPContext or SPWeb internally). Also, since they are contained within the SharePoint assemblies, you cannot just take the DLLs and put them in your app.

    In short: in most cases it will be better to re-build them using Reflector. Which one are you looking at?

Forwarding a Keystroke to another Control in WinForms

I am receiving a keystroke in a Form object's OnKeyDown. When some conditions are met (such as the keystroke being a printable character and not a hotkey) I want to forward the key to a text control on the form and set focus to the text control so the user can continue typing. I am able to decode the character typed using MapVirtualKey, but I get only the "unshifted" character (always upper case). Using ToUnicodeEx seems like too much of a PITA.

What is the best way to do this? Isn't there a way to simply forward the Windows message itself?

Can't I intercept the ProcessKeyPreview or some such and forward it to the text control's ProcessKeyPreview? Any ideas along similar lines?

Bump: No answers!

From stackoverflow
  • I did this on a single form with the following:

    private void MyFunkyForm_KeyDown(object sender, KeyEventArgs e)
    {
        // to avoid duplicate letters in textbox :-)
        if (textBox2.Focused == false)
        {
            // get the char from the Keycode
            char inputChar = (char)e.KeyCode;
            if (char.IsLetterOrDigit(inputChar))
            {
                // if letter or number then add it to the textbox
                textBox2.Text += inputChar;
            }
            // set focus
            textBox2.Focus();
            // set cursor to the end of text, no selection
            textBox2.SelectionStart = textBox2.Text.Length;
        }
    }

Spawned child exits with state = 127

Hi, I use posix_spawnp to execute different processes, and I check the status (with waitpid) to make sure the child was created properly:

    int iRet = posix_spawnp(&iPID, zPath, NULL, NULL, argv, environ);

    if (iRet != 0)
        return false;

    int iState;
    waitpid(static_cast<pid_t>(iPID), &iState, WNOHANG);
    cout << "Wait: PID " << iPID << " | State " << iState << endl;

    if (WIFEXITED(iState)) {
        printf("Child exited with RC=%d\n", WEXITSTATUS(iState));
    } else if (WIFSIGNALED(iState)) {
        printf("Child exited via signal %d\n", WTERMSIG(iState));
    } else {
        printf("Child is NORMAL\n");
    }

At first this executes properly and I get the following message:

Wait: PID 15911 | State 0 Child exited with RC=0

After executing the same process several times, the child process starts to exit with status 127.

Wait: PID 15947 | State 32512 Child exited with RC=127

After this happens, I could not get the child to spawn again. I enclosed the section of code given above in a for loop but it wouldn't spawn properly. If I restart the parent process, it works for a while but the same problem crops up again after a while.

What am I doing wrong here?

From stackoverflow
  • Check the return code from waitpid() to be sure that it isn't having problems.

    The way the code reads suggests that you are only spawning one child process at a time (otherwise there'd be no need to call waitpid() within the loop). However in that case I wouldn't expect to use WNOHANG.

    Gayan : The waitpid call returns with a value > 0 which means that there's a valid child.
  • Check this link.

    For example:

    EINVAL The value specified by file_actions or attrp is invalid.

    The error codes for the posix_spawn and posix_spawnp subroutines are affected by the following conditions: If this error occurs after the calling process successfully returns from the posix_spawn or posix_spawnp function, the child process might exit with exit status 127.

    It looks as if it might exit with 127 for a whole host of reasons.

    Gayan : I re-wrote the code using fork and execvp to get a more definite grasp of the error, and it turned out that the actual error was errno = 14 (bad address). Some digging around revealed that this was because I wasn't ending my argument list with a final NULL entry. argv = new char*[iSize + 1]; argv[iSize] = NULL; fixed the problem.

Is this the command pattern?


I have a MVP Gui and now I would like to define certain Actions or Commands (Modify, Save, Close, ...) for certain views.

Is there an easy way to do this? Should I provide Commands for each View?

From stackoverflow
  • The easiest way is to have a factory where all your command objects are instantiated. So if you have an Open Job command, all the views would go to the factory, pull out the Open Job command object, instantiate it, and then execute it. If you need to fix a bug or change the Open Job command, there's only one spot you have to change for all the views.

    With that being said, there will probably be some commands that are unique to each View. Despite that, you may still want to encapsulate those in command objects, as you can easily implement Undo/Redo with everything going through command objects.

Winforms to WPF conversion: BeginInvoke to what?

Hi all,

Here's my old code from WinForms:

    private void ValueChanged(double inValue1, double inValue2) {
        //only manual mode for this driver, so that's easy.
        if (ValueLabel.InvokeRequired) {
            ValueLabel.Invoke(new MethodInvoker(delegate {
                ValueLabel.Text = (inValue1 * inValue2 / 1000).ToString("f1");
            }));
        } else {
            ValueLabel.Text = (inValue1 * inValue2 / 1000).ToString("f1");
        }
    }

Is there an easy way to convert this to be WPF friendly? So far, I have:

    private void KVPValueChanged(double inValue1, double inValue2) {
        if (ValueLabel.Dispatcher.Thread == Thread.CurrentThread) {
            ValueLabel.Content = (inValue1 * inValue2 / 1000).ToString("f1");
        } else {
            ValueLabel.Dispatcher.BeginInvoke(delegate {
                ValueLabel.Content = (inValue1 * inValue2 / 1000).ToString("f1");
            });
        }
    }

But that second 'delegate' call fails. How can I invoke this delegate? I suppose I can go through the whole making a delegate method, making an instance of the delegate method, invoking that particular instance, etc, but I thought the whole point of these anonymous delegates was to avoid that hassle. Plus, my old winforms code has that first implementation all over the place, so I'd really like to avoid having to de-anonymize all my delegates.

Edit: I can try to use the MethodInvoker like I was before, but then the compiler gets confused. MethodInvoker is part of System.Windows.Forms, so using that approach doesn't work. As in:

    private void ValueChanged(double inValue1, double inValue2) {
        if (ValueLabel.Dispatcher.Thread == Thread.CurrentThread) {
            ValueLabel.Content = (inValue1 * inValue2 / 1000).ToString("f1");
        } else {
            ValueLabel.Dispatcher.BeginInvoke(new System.Windows.Forms.MethodInvoker(delegate {
                ValueLabel.Content = (inValue1 * inValue2 / 1000).ToString("f1");
            }));
        }
    }

That use of MethodInvoker is not kosher. Is there a separate implementation of it, or some other way to use the same behavior?

From stackoverflow
  • I think you need to change the signature of the delegate:

    ValueLabel.Dispatcher.BeginInvoke(DispatcherPriority.Normal, (ThreadStart)delegate {
        ValueLabel.Content = ...
    });

    Also, look up using the BackgroundWorker component. Not just for wpf but also for the winform async operations.

    mmr : The methodinvoker call doesn't work...
    mmr : der. forgot to include the windows forms library, which is where method invoker is.
    blue_fenix : sry, spoke too soon before looking back into my Pro WPF book for the real answer :)
    mmr : Yes, it does appear that 'ThreadStart' is the magic word here. Thanks!
  • System.Windows.Forms.MethodInvoker is simply a delegate that takes no parameters and returns void. In WPF, you can just replace it with System.Action. There are also other built-in delegates that accept parameters, return values, or both.

    In your case,

    ValueLabel.Dispatcher.BeginInvoke(new System.Windows.Forms.MethodInvoker(delegate {
        ValueLabel.Content = (inValue1 * inValue2 / 1000).ToString("f1");
    }));

    becomes

    ValueLabel.Dispatcher.BeginInvoke(new Action(delegate() {
        ValueLabel.Content = (inValue1 * inValue2 / 1000).ToString("f1");
    }));

Run C++ interpreted?

Is there a way I can run C++ code interpreted instead of compiled, so I can edit code and write functions on the fly?


From stackoverflow
  • Take a look at Ch, an embeddable C++ interpreter.

    Ch is an embeddable C/C++ interpreter for cross-platform scripting, shell programming, 2D/3D plotting, numerical computing, and embedded scripting. Ch is a free and user-friendly alternative to C/C++ compilers for beginners to learn C/C++.

  • CINT (readme) certainly has single-stepping. I'm not sure about modification on the fly, though.

  • Ch and CINT (usually as part of the ROOT system) will interpret C++. However, my experience with CINT has not been good: the language support is not complete (particularly where templates are concerned), the execution is much slower, there has been a history of bugs with e.g. variable scope and loop exiting, and (IMO) it's more hassle than it's worth. As a language, C++ is singularly ill-designed for interpreted use.

    If you need to run interpreted code, why not use a modern interpreted language like Python or Ruby? A tool like SWIG can be used to connect them to existing C/C++ libraries if needed.

    MSalters : "ill-designed" suggests it was designed for such use. I'd say "not designed and ill-suited"
  • This doesn't exactly answer your question, but perhaps it will help.

    The MS C++ compiler supports Edit and Continue, which allows you to stop, make changes, recompile and continue without shutting down your program.

  • I saw a presentation on ccons at CUSEC's demo camp back in January. Its aim is to provide an interactive interpreter like Python's. It was in its early stages then, but it impressed me nonetheless.

  • Try these:

PEVerify MD Error: 0x8013124C


I get this 'error' when running PEVerify on a custom generated assembly.

[MD](0x8013124C): Error: Method has a duplicate, token=0x06000023. 
[MD](0x8013124C): Error: Method has a duplicate, token=0x06000021. 

Besides this (and 196 others of exactly the same error), there are no issues with the metadata and IL. And it works correctly, too.

I have been unable to track down where it comes from (as it does not affect the assembly in any way).

Google, unfortunately, does not reveal much on this error.

Can someone please provide some insight into this 'error' and how it could be caused?

Thanks :)

From stackoverflow
  • It sounds like peverify believes that you have duplicate method rows in the assembly metadata. I read in the comments that you are using Reflection.Emit to generate the assembly. It sounds like it's possible that you are re-using a method definition during generation instead of creating a new one for each method.

  • I solved the problem.

    It is caused by emitting a method with the exact signature of another.


    This goes for any member. Hence, the same MD error will likely appear when this is run on obfuscated assemblies.

How to create a Silverlight application in .NET 2.0

I need to create a web application with Silverlight controls. How can I create this in .NET 2.0 with VS 2005?

Help me out

From stackoverflow
  • Get Started Building Silverlight 2 Applications

  • Also see this previous SO question:Good resource for learning Silverlight 2 Development?

  • You won't be able to develop Silverlight control projects in VS 2005. Additionally, you can't use the new Silverlight web control in VS 2005 or on ASP.NET 2.0, since it depends on .NET 3.5.

    Hence you should copy the XAPs and/or XAMLs created elsewhere into your project and treat them simply as content files (place XAPs in the 'ClientBin' folder). You will need to follow the instructions for using Silverlight in plain HTML files in your ASPX.

    What I've done is create my own simple WebControl for ASP.NET 2.0 to generate the appropriate HTML for a Silverlight control. The Render method looks something like:

    protected override void Render(HtmlTextWriter writer)
    {
        if (DesignMode)
        {
            //Display something sensible here
        }
        else
        {
            writer.AddAttribute("data", "data:application/x-silverlight-2,");
            writer.AddAttribute("type", "application/x-silverlight-2");
            writer.RenderBeginTag(HtmlTextWriterTag.Object);
            writer.AddAttribute("name", "source");
            writer.AddAttribute("value", Page.ResolveUrl(Src), false);
            writer.RenderBeginTag(HtmlTextWriterTag.Param); writer.RenderEndTag();
            writer.AddAttribute("name", "minRuntimeVersion");
            writer.AddAttribute("value", "2.0.31005.0");
            writer.RenderBeginTag(HtmlTextWriterTag.Param); writer.RenderEndTag();
            writer.AddAttribute("name", "initParams");
            writer.AddAttribute("value", InitParams);
            writer.RenderBeginTag(HtmlTextWriterTag.Param); writer.RenderEndTag();
            writer.Write(@"<a href="""" style=""text-decoration: none;"">
                <img src="""" style=""border-style: none""/>
                </a>");
            writer.RenderEndTag();
        }
    }

    If you are developing for a public site you would need to include the silverlight.js and wire it up as per the normal HTML usage of the control so that it will automatically attempt to install silverlight and refresh the page when installed.

Mapping a ternary association with FluentNHibernate, using IDictionary<TKey,TValue>

I'm trying to map a ternary association using FluentNhibernate.

I have 3 tables:

  • TUser(PK int id, string name, ...)
  • TGroup(PK int id, string name, ...)
  • TRole(PK int id, string name, ...)

And a 4th one that associates them, representing the ternary association:

  • TUserGroupRole(FK int userid, FK int groupid, FK int roleid)

Basically, a user has a particular role for a group. I'm using 3 types in my model :

  • User
  • UserGroup
  • UserRole

In the User class, I want to use an IDictionary<UserGroup,UserRole> to index user roles by groups:

public class User
    public virtual int Id { get; set; }
    public virtual IDictionary<UserGroup, UserRole> Roles { get; set; }
    // ...

Using regular hbm fashioned XML mapping file, I achieved that with a <map> element, like so (pseudo mapping):

<map name="Roles" table="TUserGroupRole">
    <key column="userid"/>
    <index-many-to-many column="groupid" class="UserGroup"/>
    <many-to-many column="roleid" class="UserRole"/>
</map>

I wasn't able to figure out how to generate the same mapping using FluentNhibernate, so any help on this point would be very appreciated.

From stackoverflow
  • In response to the reply you got on the FNH group, make sure you are exporting your mapping files like:

    .Mappings(m => m.FluentMappings.AddFromAssemblyOf<MyObject>())

    I tend to agree with that response, because I have only ever seen that type of exception when elements are out of order; that is, when the key/index or id element is not the first child.

  • You map the collection as IDictionary<UserGroup, UserRole>, but how to map a collection as IDictionary<string, UserRole>, where the key (of type string) is

What to cover in a django introduction talk?

I am taking a session on web application development using django at a local Open Source Community.

The audience has earlier programming experience in Java or C; Some people with php and .net.

What should I talk about? Hardly anybody will be familiar with dynamic languages. How should I cover the essential topics of Python?

Are there any specific quotes or pictures that I can put up in the ppt?

From stackoverflow
  • I just did this last week.

    I think it helps to provide some context for Django. View the architecture from the outside in: Apache - mod_wsgi - Django - template presentation - view function - model.

    I think it helps to look at an HTTP request's life cycle from browser to Apache to mod_wsgi to Django to the URL mapping to the view function to the template rendering.

    I think that Python language features should be minimized. It's an easy language and can be discussed as side-bars.

    Don't forget to briefly (very briefly) mention the essential features of Django -- url mapping, template rendering, ORM, application structuring, built-in admin pages.

    I think that the real heart of the presentation should follow the Django tutorial style.

    1. Look at models first. Cover just enough Python syntax and semantics to explain what the code sample means.

    2. Look at URL mapping and view functions next. Keep these short. Focus on built-in master-detail views since they're so simple.

    3. Look at template code last. Be sure to differentiate Django templates (which do very little) from JSP pages or PHP (which do too much.)

    I think it's important to not waste much time on Python-isms. The language can spark useless debates on multiple-inheritance, compile-time type checking and the like.

    I think it's important to dwell on the Django unittest features. Those are wonderful. We use them heavily.

  • A couple of things to get the ball rolling:

    • Find a piece of code that's really verbose in a language they're familiar with and rewrite it in Python. If you can show them how Python will make their life simpler rather than discussing it in terms of language features and frameworks, your job will be much easier.
    • Show off the automatic administrator interface. That's what gets everyone (myself included).

    Granted, there's a lot more to be discussed to compare frameworks. But if you lead in with the big guns, you'll get their attention so much more quickly.

    One other anecdote you may find useful: I seem to recall reading about a programmer who was trying to convince his co-workers that Python is the language to go with. There's the old saying "you don't truly understand something until you can explain it to an 8 year old." So he taught his 8 year old daughter to program with Python and showed his coworkers the code. Personally, I would want to learn Python if I saw a presentation like that.

Requirements, Specs, and Managing Up in an Agile Environment

My company has tried to adopt the scrum methodology with mixed success. These are some areas where we've had issues. How do you handle these?

  1. Tracking requirements from Product Marketing through to product. We're trying out JIRA to track all requirements individually and assigning a release to each one as it is picked for implementation.
  2. Who creates stories? Product Management who doesn't know enough to create effectively small stories, developers who may not have domain knowledge, an analyst in between?
  3. Functional specs
    1. do you write them or just try to get them into a story definition?
    2. Do you write functional specs per story? Per feature?
    3. How do you see the relationship between functional specs and stories?
  4. answering the question from people with VP in their title "what are we going to get by [8 months from now]?"
From stackoverflow
    1. You should translate your requirements into a Product Backlog. This backlog is what you use to decide what Sprint Backlog items are chosen for each Sprint iteration. Management decides what is on the Product Backlog, but the team needs to agree to what they can produce in the Sprint (this is a negotiation that occurs at every sprint).

    2. Your Product Owner (usually a product manager) drives the creation of the stories. The Stories are simple (as a system admin, I need to be able to add a user). If your product management does not understand your product, you are in trouble.

    3. Agile is about designing as required. The design is never in the story. The spec can be per story, or per feature. You could design all your CRUD inside of one spec, which covers multiple stories.

    4. The Product Owner gets a product demo at the end of every Sprint. So value is demonstrated at every cycle. So your VP would get reports on a monthly basis (usually 3 weeks of dev + 1 week to prepare for the Sprint demo).

  • Let's see if my take adds anything (not certain by any means...)

    1. I'm not sure about the "assigning a release to each one" thing. I thought the idea was to put a "price" on each story/function point/unit of development and pick what goes into the current sprint. Everything else is backlog - you can offer some indication of remaining effort (see evidence based scheduling in FogBugz) but I don't think you should be allocating to specific sprints - you don't know what'll be in the backlog by the time you get there, for one thing. All you know is that it's going to change, so why waste time on it?

    2. There should be a designated user representative. Or more than one, if domain knowledge can't be concentrated in one individual. But someone from the business domain should be in charge overall of deciding what goes into a sprint, subject to the effort available, of course. There can be a place for a Business Analyst type, but they need to be domain experts. If your user(s) can't write stories, even with your help (it's a co-operative thing, or should be) then you all need help. Consider getting a coach involved for a sprint or two.

    3. You won't be writing functional specs in an Agile environment. You'll be writing code. Your user will be on hand at all times (or you're already exposed to significant risk) and they're your spec. The story tells you "what", and is going to be a small enough unit of work that you should be able to decide on "how" fairly quickly. And refactor. Always refactor. It's not an overhead, it's part of the process and your design won't evolve satisfactorily without it.

    4. If you have VPs (hey, I'm a VP, we're not all bad!) who ask that sort of question, then parts of your company are not getting it yet. Choose someone (the person best able to deal with non-techies, perhaps, or maybe the person least able, since they clearly need the practice) to explain it to them. If what's built is important to them, perhaps their questions are an indication that someone's not as involved as they should be.

  • Some information on how Atlassian (creators of JIRA) use their products to do agile development:

  • If you are going to do anything in regard to writing or designing code, one of the things you should always do is write a spec, irrespective of whatever methodology you are using, whether it is Scrum, XP, Agile or SDLC. Many people say that writing specs is unagile and a monument to wasteful bureaucratic paperwork. The simple fact is that they are misguided when they say that the code is the spec.

    The clear fact is that a spec allows you to formulate your ideas and designs beforehand, and it's much easier to change a spec than it is to change a program, especially if you are working outside the confines of a simple LOB application. Specs ensure you have a clearer understanding of what is required when you start coding.

    It's been shown time and time again that teams that use specs design better software. In my opinion, if you hear anybody say the code is the spec, that is dogma, plain and simple, and is storing up huge maintainability problems for the future.

    As an aside, I don't have anything against the Agile Manifesto or light management process centric methods like Scrum. I've used it in the past few years a number of times, and it delivers. I've also seen good software down the drain, where an agile focus would have saved it. But it is no panacea or silver bullet.

    Tom Bushell : I maintain that XP's User Stories are a "spec".
    scope_creep : User Stories aren't specs by definition and have no place being described as a spec. A User Story is more akin to a requirement; to say that one or more sentences, succinctly rolled together as a requirement, can somehow describe the complex relationships between objects in a complex distributed system is like saying that the movement of cylinders and cams in a car engine is somehow related to how it gets the driver from A to B. There is a relationship, but it is indirect.
  • With regard to Functional Specs - Scott Ambler's "Agile Modeling" site has some good samples. There's also a lot of concise, pragmatic advice on Agile requirements in general.

    Worth a look!

Traceable/documented calculations

We are about to redesign our calculation engine, and want each step in the calculation to be traceable/documented so that the user can get a complete report on what amounts make up each total.

Are there any patterns or good examples on how to implement 'traceable' or 'self-documenting' calculations?

From stackoverflow
  • Yes, log the intermediate steps in the calculation as though you were submitting your work to a fussy mathematics professor.

    Vegar : This is the easy way, and most likely the chosen one. Must make the log in a format that is interpretable for multiple purposes, though. I guess a simple text log won't do...
  • You build the expression you wish to calculate as a tree of operators and operands:

    class Node { public: virtual double eval() = 0; };

    class Operand : public Node { /* holds a constant value */ };

    class Operator : public Node { };   // "operator" is a C++ keyword, so capitalize

    class BinaryOperator : public Operator { };

    class Addition : public BinaryOperator { /* eval() returns lhs + rhs */ };


    This is the GOF Composite Pattern.

    You build a GOF Visitor that visits nodes and evals recursively, so that visiting the root of an expression evals the whole tree and returns the result of the expression.

    You add reporting to the Visitor or a class derived from it. (It's helpful to have several subclasses of Visitor; one may report, another may verbosely log debug info, whatever, another may report in a different format, etc.)

    I put something like this together with JavaCC (Java Compiler Compiler), which took care of generating code to parse an expression (I supplied the expression grammar in modified BNF) and build its expression tree for me; the eval and visitor code I wrote by hand.

    Peter M : I have thought about this sort of idea in the past but with an emphasis on ensuring compatible units were used in each step of the process. I have always wondered if the implementations could be made efficient enough that the overhead would not be noticeable
  • In an RPN-based calculator you could print the top of the stack after each operation.

    Otherwise, if your language allows it, you could override the arithmetic operators. Here's a simple example using Ruby:

    class Fixnum
        alias :plus :"+"
        alias :minus :"-"
        alias :mult :"*"
        alias :divide :"/"
        def +(other)
            res = plus(other)
            puts "#{self} + #{other} = #{res}"
            res
        end
        def -(other)
            res = minus(other)
            puts "#{self} - #{other} = #{res}"
            res
        end
        def *(other)
            res = mult(other)
            puts "#{self} * #{other} = #{res}"
            res
        end
        def /(other)
            res = divide(other)
            puts "#{self} / #{other} = #{res}"
            res
        end
    end

    # Example
    (1+9) * 10 / 5 + 99 - 3
    # prints:
    # 1 + 9 = 10
    # 10 * 10 = 100
    # 100 / 5 = 20
    # 20 + 99 = 119
    # 119 - 3 = 116

    Another solution is to have a proxy that intercepts all the arithmetic operations and perform the proper logging.

    Vegar : As a Delphi developer, this idea is a little out of reach... I like the idea, though...

Warning as Error, but not all

I would like to enable Warning as Error on our current project/solution for obvious reasons.

There are several warnings that should NOT be handled as an error, eg Obsolete, and using #warning directives.

Is this possible?

I see that I can make specific warnings behave as errors, but I would really like the inverse of that.

The closest I can get is disabling the 2 above mentioned warnings, but then there will be no 'warning' for them either.

Any suggestions?

To clarify:

I want the warnings, just not as an error. So all warning except for the above mentioned exceptions will behave as an error, and the above mentioned will be warnings (ones I can see in the compiler results).

From stackoverflow
  • The warnaserror compiler option supports erroring only on specific warnings. You can thus specify all warnings to be shown as an error, then disable the errors for certain warnings. Using the page's example as a guide:

    leppie : Yes, that is what I want, not sure if that works, but will try :) Thanks!
  • It is possible in VS2005 assuming you are using C#.


    In Visual Studio 2005, you have a couple more options to control this. Now, you have 3 options for treating warnings as errors: All, None, or Specific Warnings, where you can provide a semi-colon separated list of error numbers.

    It is also possible to do it with GCC with the option -Werror=

    leppie : I want the opposite; see strager's answer, he seems to understand the issue :)
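For the accepted warnaserror suggestion, a sketch of what the options might look like, assuming C# and taking CS0612 (obsolete member) and CS1030 (#warning directive) as the two exceptions; verify the exact warning numbers against your own build output:

```shell
# Promote all warnings to errors, then demote the two exceptions back to
# plain warnings: they still appear in the build output, but no longer
# fail the build.
csc /warnaserror+ /warnaserror-:612,1030 Program.cs
```

In an MSBuild project file, the corresponding knobs are reportedly the TreatWarningsAsErrors and WarningsNotAsErrors properties.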

SourceSafe CRC calculation

Does anyone know what CRC checksum calculation is used in Microsoft SourceSafe? I want to calculate a checksum locally and compare it to the SourceSafe checksum.

I am using a CRC algorithm I found on the internet, but the seed or polynomial for the algorithm seems to be different for SourceSafe.

From stackoverflow
  • See this post:

    The blurb you may be looking for is:

    And the 16-bit CRC is mostly the standard algorithm. The one difference is that in my experience, CRCs typically start off by initializing the state to -1 (0xFFFFFFFF), accumulating, then returns the logical-NOT of the result. However, the VSS CRC logic initializes state to 0, and does not apply a logical-NOT at the end. Make certain you're using this technique when verifying any CRCs in the file. (Refer to VssCrc32() in CRC32.c for a working implementation.)

    The code he mentions is included in this zip file:

    Rine : Thanks a lot, I converted the C++ code to C# code and it works.

What is Application Domain in .NET and what is its need?


What is an AppDomain in .NET, and why is it needed?

See Also:

Usage of AppDomain in C#

From stackoverflow

Serializable Class Collections

I have a class that contains a list of properties which serializes just fine. However, I need one of those properties to contain a collection of another class, to give me sub-elements.

Example XML


I'm not sure how to accomplish that when I serialize the main class.

Currently I use this command to serialize my class

    Dim oXS As XmlSerializer = New XmlSerializer(GetType(VideoOptions))
    Dim strW As New StringWriter
    oXS.Serialize(strW, Me)
    VideoOptionsAsXML = strW.ToString
From stackoverflow
  • Please check this; copy-pasting the lot wouldn't be fair. This will help you out, I think:

    XML Serialization of Collection

  • You would simply have to have a property on the VideoOptions class which is a collection of Property3

    Create a collection class as below.

    Public Class Property3Collection
        Inherits CollectionBase
        Public Function Add(ByVal item As Property3Item) As Integer
            Return Me.List.Add(item)
        End Function
        Default Public ReadOnly Property Item(ByVal index As Integer) As Property3Item
            Get
                Return CType(Me.List.Item(index), Property3Item)
            End Get
        End Property
    End Class

    Have a class for your item

    Public Class Property3Item
        'Add in Item property
    End Class

    build up your Property3Collection Class

    Public Sub AddPropertyItem()
        Dim newCol As New Property3Collection
        newCol.Add(New Property3Item)
    End Sub

    Add the property to your VideoOptions class

    Private m_Property3 As Property3Collection

    Public Property Property3() As Property3Collection
        Get
            Return m_Property3
        End Get
        Set(ByVal Value As Property3Collection)
            m_Property3 = Value
        End Set
    End Property

    As long as Property3Item has a constructor with no params (needed for XML serialization), the XmlSerializer class will serialize and deserialize this to the format you specified without a problem.

    Hope this helps

Sorted Combination of Multiple Lists

Consider L1, L2, L3 as lists containing n1, n2 and n3 integers in sorted order respectively.

Task is to construct a sorted list L such that,

L[0] = L1[0] + L2[0] + L3[0]
L[i] = L1[i1] + L2[i2] + L3[i3]
L[n1 * n2 * n3 - 1] = L1[n1 - 1] + L2[n2 - 1] + L3[n3 - 1]

But n1, n2, n3 are very large and therefore L cannot be constructed in one go and then sorted.

Therefore the list is to be constructed in stages, such that we can display the top k integers and save the state of the computation, resuming later by computing the (k+1)th top integer.

What all data structures and algorithms can be used to achieve the objective?

From stackoverflow
  • Can't you just use a modified merge sort, since you already have three sorted lists? (By "modified" I mean something that takes advantage of the fact that you know that each input list is already sorted.)

    Assuming you cannot use a merge sort directly, as you don't want to compute the entire newly merged sorted list in memory, how about this: use a modified merge sort where you calculate the first group of merged entries and display those, maintaining the pointers used in the merge. You just persist where you are in each list (one pointer to the current location in each) and pick up where you left off for each chunk.
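    Assuming you did have several already-sorted streams to combine, the chunked lazy merge this answer describes could be sketched in Python (the helper name is hypothetical; heapq.merge does the pointer bookkeeping internally):

```python
import heapq
from itertools import islice

def merged_chunks(*sorted_lists, chunk_size=3):
    """Lazily merge already-sorted inputs, yielding one chunk of merged
    values at a time; the per-list positions live inside the generator,
    so each chunk picks up exactly where the previous one left off."""
    merged = heapq.merge(*sorted_lists)  # lazy k-way merge, no full copy
    while True:
        chunk = list(islice(merged, chunk_size))
        if not chunk:
            return
        yield chunk
```

    For example, merging [1, 4, 7], [2, 5, 8] and [3, 6, 9] with chunk_size=4 yields [1, 2, 3, 4], then [5, 6, 7, 8], then [9].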

  • Ok, I will maybe be flamed for this answer. But since you only need the algorithm, the best solution would be to traverse every list at the same time, building the result list from the best element at each step (in this case the lowest, or whichever you prefer in a tie). With this method you have 4 positions: one for each list you are traversing, and a last one pointing to the position in the result list where you need to insert (or the last position inserted). With this, the only structure you need is a list.

    I see a problem with merge sort in this case: the data you are showing might not be final, since the next portion still has to be sorted and could merge into the current one.

  • OK, first an example in two dimensions:

        1  2  3
    1   2  3  4
    5   6  7  8
    7   8  9 10

    You start in the top left corner, obviously, and put the value into the result list. Next, you have to add all candidates that are reachable (through incrementing exactly one index) from there to some sort of sorted collection, here, that is the cells with the values 3 and 6. Then you take the lowest member out of that collection, put its value into the result list, add all candidates reachable from there that are not yet in the collection into that, and so on.

    You will need:

    • a data structure holding a candidate, with all indices and the result value (I represent that below as "((i1 i2) value)").
    • a data structure for a collection of candidates, sorted by value. A heap seems ideal for that.

    You will have to make sure that all candidates are unique by their indices when you put them into the collection. The values are not necessarily unique, but the heap should be sorted by them. Since a given set of indices always produce the same value, you will have to check for uniqueness of the indices only when you encounter that value while inserting into the heap. It might be an optimization to make the nodes of the heap not single candidates but a list of candidates with the same value.

    Doing this with the above example: First, the result list is (2). The candidates are ((1 2) 3) and ((2 1) 6). Take the candidate with the lowest value out, put the value into the result list -> (2 3), find all new candidates' coordinates -> (2 2) and (1 3), calculate their values -> ((2 2) 7) and ((1 3) 4), put them into the candidates' heap (serialized representation here) -> ((1 3) 4) ((2 1) 6) ((2 2) 7), lather, rinse, repeat.

    In tabular form:

    result-list          candidates
    (2)                  ((1 2) 3) ((2 1) 6)
    (2 3)                ((1 3) 4) ((2 1) 6) ((2 2) 7)
    (2 3 4)              ((2 1) 6) ((2 2) 7) ((2 3) 8)
    (2 3 4 6)            ((2 2) 7) ((3 1) 8) ((2 3) 8)
    (2 3 4 6 7)          ((3 1) 8) ((2 3) 8) ((3 2) 9)
    (2 3 4 6 7 8)        ((2 3) 8) ((3 2) 9)
    (2 3 4 6 7 8 8)      ((3 2) 9) ((3 3) 10)
    (2 3 4 6 7 8 8 9)    ((3 3) 10)
    (2 3 4 6 7 8 8 9 10)

    I don't see a better way at the moment. The heap seems to need a number of nodes on the order of the sum of n1, n2, and n3.
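    A minimal Python sketch of this candidate-expansion approach (function name hypothetical), using heapq for the sorted candidate collection and a set to keep index triples unique:

```python
import heapq

def k_smallest_sums(l1, l2, l3, k):
    """Return the k smallest values of l1[i] + l2[j] + l3[m], in order,
    by repeatedly popping the best candidate and pushing the candidates
    reachable from it by incrementing exactly one index."""
    lists = (l1, l2, l3)
    start = (0, 0, 0)
    heap = [(l1[0] + l2[0] + l3[0],) + start]   # (value, i, j, m)
    seen = {start}                               # uniqueness by indices
    result = []
    while heap and len(result) < k:
        value, i, j, m = heapq.heappop(heap)
        result.append(value)
        for idx in ((i + 1, j, m), (i, j + 1, m), (i, j, m + 1)):
            in_range = all(x < len(lst) for x, lst in zip(idx, lists))
            if in_range and idx not in seen:
                seen.add(idx)
                value = sum(lst[x] for lst, x in zip(lists, idx))
                heapq.heappush(heap, (value,) + idx)
    return result
```

    Run on the two-dimensional example above (with a third list of [0] to make each entry a triple), the first nine results are 2, 3, 4, 6, 7, 8, 8, 9, 10, matching the table.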

Reasons not to code a program?

Let's play the devil's advocate: what reasons would you give management NOT to code a solution, but to purchase an expensive package instead?

From stackoverflow
  • Simple; BUY when the Total Cost of Ownership (TCO) for writing your own is greater than the TCO of the package. (How you reliably work out the TCO of writing your own is an exercise left for the reader ;-) )

    Not-so-simple; DIY when the software is your core business, or when the software is a unique selling point. You don't want to outsource your brain or your heart.

    Bob Cross : For those that don't know - you should define TCO at least once.
    Ed Guiness : updated with link
  • A very similar program exists already (unless you're doing it for fun / learning purposes).

    1. Cheaper to buy in.
    2. If the domain is unfamiliar to the existing developers
    3. If the level of risk is unacceptably high
    4. Time criticality.

    Though at the end of the day everything boils down, in one way or another, to point 1.

    Gamecat : And even if 4 out of 4 apply... (sigh)
    : An addendum to 1. It could possibly also include support, which means you won't have to spend company resources fixing it after delivery either.
    Charles Bretana : Last point says it all... convince them it's cheaper... How to do that, of course, there's the rub...
  • Money Money Money

    0xA3 : In a rich man's world only.
    : (-1) You spend money either way. Your answer is not clear.
  • The cost and time involved in developing the new solution is high, compared to the cost and time involved in buying the solution. Usually management is very sensitive to these two factors.

  • It will be cheaper, but only if you are prepared to modify your business practices to fit in with the way the package works. Performing loads of customisation on the package to fit your business practices almost always ends up costing more than creating a custom app from scratch, and often ends in tears.

  • Advantages of buying a component:

    • It might be cheaper to buy it than to develop it
    • (In some cases) the team doesn't have the know-how to develop that component, so the time to gather that know-how is prohibitive
    • Maintenance costs are transferred to the supplier

    Disadvantages:

    • No control of the lifecycle of the component (including when new releases are made, what fixes should be done, etc.)
    • It may require adaptation/integration effort
    • It may introduce performance penalties due to integration issues

  • There are a lot of reasons to buy in software. In my opinion the most important are:

    • Lack of knowledge and expertise within the organization
    • The lack of manpower to quickly adapt to changes in market or law (taxation)

    Advantages of Outsourcing:

    • Easier to budget
    • No need to hire, train personnel
    • Ability to have tight Service Level Contracts
  • I've seen quite a few instances of this, and I think it separates into two halves. Firstly, there's your "commodity" software (messaging middleware, databases, etc.) that is typically always bought in. I would never sit down and write my own asynchronous messaging system, unless that was absolutely core to my business. Secondly, there is the value-added portion, which I think is rather different.

    I work in finance, and there are a few vendor systems (examples are Murex, Summit and Sophis) that perform risk/booking/back-office functionality for various financial market products. I think that choosing these is dangerous, for two reasons.

    First reason is that you are no further ahead of your competitors in terms of software, you're adding no value of your own, so it just becomes a "race to the bottom" in terms of what price you can charge, or how much risk you can take.

    Second reason, and more important from a software developer's point of view, although the vendor's system might suit you now, it may not suit you in two or three years time. If you've built your business on top of it, and suddenly it changes - or doesn't change when you need it to - you can be left high and dry. Or, if the company goes bankrupt or wants to move out of the market, you've got two options - buy it, or re-write all of your systems from scratch.

    I have lost count of the number of firms I've seen who are desperately trying to switch off value-add vendor systems (typical examples are Murex, Sophis, Summit...see above :) and write their own.

    A supplementary argument against vendor systems for value-add is that the consultants/contractors are typically a lot more expensive. A senior C# consultant can be had here for £x00 a day. A consultant with vendor system experience will be around 20-25% more.

  • If you were to build your own car from scratch, it would probably be very expensive and quite unreliable, but if you were to buy a ready-made car, it'd probably be a lot cheaper and a lot more reliable.

    The same is true of software (most of the time): every bit of bespoke code needs to not only be developed, but also tested and supported. If you can fit a product to the client's requirements, it can generally be a more cost-effective solution. However, I think problems occur when a product is shoehorned into a set of requirements which demand something it wasn't designed for.

  • Some reasons

    • The solution does not support a core business process, but a commodity process such as Finance, HR, Facilities management, where you are no different from your competitors (your core/noncore processes will vary!). Your internal development skills would be better used to create and maintain solutions that give your company a competitive edge.
    • The former applies in spades if the solution does not need to be heavily integrated with your existing or future solutions (it supports a relatively isolated process).
    • No need to budget and staff for maintenance throughout the lifecycle of an in-house developed application. The numbers vary, but one figure I've seen and find quite believable is that the initial development costs of a custom application accounts for only one-tenth the total lifecycle cost. Granted, that includes stuff like end-user training, O/S upgrades and integration which also hits an externally procured solution, but a 10x factor on the initial price tag will make most management take pause.
    • A special case of the former: skills for development, maintenance and operations may be lacking or a bottleneck in your organization. Every skill shortage can be filled with enough time and money, but not necessarily within acceptable boundaries. Every additional technology skill your staff needs to acquire means less time to develop and maintain existing skills: the skills cost of a diverse technical landscape grows at a more than linear rate...
    • As e.g. Peter states: predictable but high costs can trump the risks of starting a development project, in a business where 20% cost overrun is often considered a successful project, whereas unsuccessful projects have basically no limit...
    • As vatine also notices: commercial software can be had now, delayed only by installation and end user training. Granted, installation of something like SAP isn't done in a day...

    This applies if your vendor is reasonably stable. If they are small or have weak finances, commercial software can be a much greater gamble than in-house development.

    Whatever way you go, you get lock-in effects. It's never free and seldom cheap to migrate away from an application that actually finds use.

  • A one-shot in-house development can be justified to replace subscription software, i.e., a one-off development cost of 2X can be justified to replace a yearly subscription of X, if you have the development resource available. It can also mean that the final solution is exactly targeted at the business needs, whereas a third party solution might not match exactly or require extra support or workarounds. This applies to projects that I have worked on in the past year or two, with a benefit that the new system is more accurate and hence led to a requirement for fewer support people. Yes, good software makes people redundant.

    In-house development should also be done for core company software, the software that allows you to exist as an entity so that you are not hung out to dry the next time your software support contract or subscription comes around, or by a solution that isn't perfectly suited to your needs.

    However one-off costs for essential software that isn't providing a core function is easy to justify, whether it is a vast amount of money for Oracle, to a decent CMS, CRM or customer ticketing system that you need by last week.

    Pontus Gagge : Unfortunately, one-off costs don't exist. You always have subsequent support and maintenance costs, whether they are visible or not.
    JeeBee : Yes, but the external company support/maintenance costs are far costlier unless you have an internal IT bidding/billing setup (because they were tired of being called a cost centre). In this case you can simply ignore the ongoing costs because the major cost is the recurring subscription fee.
  • Vastly simplified, but select the smallest of:

    • For a home-brewed tool: cost = task_execution_time * uses * $/user-h + tool_development_time * $/dev-h
    • For a purchased tool: cost = task_execution_time * uses * $/user-h + tool_purchase_price
    • Without a tool: cost = task_execution_time * uses * $/user-h

    cost is the cost of performing the task uses times. task_execution_time is the (person-)time it takes to perform the task once, including any overhead; tool_development_time is the time it takes to develop the tool.

    I'm leaving out a lot of variables to keep it simple. Add more to suit :)

  • I'd give them the true reasons (assuming I was in a company I actually wanted to work for, that is, rather than having been enslaved by PHBs). Usually this is "I'll get my job done faster if I have this, and my time is worth a lot to you".

    But being more helpful: if the question is, "when do situations arise where you want to pay for software?", then:

    • When the software in question isn't the core competence out of which the company makes its money. In the case of non-software companies, then the same thing but with "department" instead of "company" and "justifies its existence" instead of "makes its money". So we're going to get a package in, the question is whether to get the expensive one.


    • When no good free option exists, or where the expensive software is better than the free due to better features, lower TCO (on account of not having to fix the bugs yourself), or whatever.


    • When the package isn't just software, it's a hosted service, and the cost to the organisation of training someone to install the service and keeping a whole person on-call 24/7 to maintain it would be even more than the "expensive" package. This is another part of TCO, but it applies even if the software itself costs the same either way.

    What the first point means, is that games companies shouldn't be writing their own IDEs, web app companies shouldn't be writing their own compilers, banking software departments shouldn't be writing office applications, etc, unless you really think that you're better off doing that than writing games, or web apps, or trading software, or whatever you have actual deadlines on. Obviously there are some exceptions: if you want a tiny little script or webapp it may be easier just to write it than to find something off the shelf that does what you need.

    In the second point, "better" means different things to different organisations. Assuming equal functionality, if your department is full of linux geeks then the free option probably has more plus-points than if you have no linux geeks, and also need an SLA on the package in question. If the "package" is going to be compiled into your non-free product, as opposed to just supporting your development, then obviously free-as-in-GPL software is useless to you whereas free-as-in-MIT would probably be fine. And so on.

  • It would depend in part on what management wants the developers to code.

    If it's a game development company and they're rolling their own email solution, that would be a waste: talented game developers would be coding their own SMTP client and server system rather than doing much more enjoyable game development (not to mention that this would probably go against whatever semblance of a "mission statement" the developers had in mind when they got hired on).

    If instead it's coding some small utility or other it could be worthwhile. If there are interns, that could be a good project to throw to them to start on, or if some of the developers get sick and tired of game physics and want to do something more bland, they could work on it for a while.

  • When I wanted to code something myself and my old boss thought we should buy it, he'd say, "buy the best, make the rest."

    Another rule of thumb: when it's one of your business's core offerings, do it yourself. If it's not, consider outsourcing or purchasing from someone for whom that is their core offering. They'll do it better anyway.

  • I think an analogy would be suiting.

    Let's say you have an idea for the newest race car engine that will instantly make any car go 20 mph faster. You were able to do this because you understand intimately, at the quantum level, how to optimize engines so that they work better. However, your engineering skills in other areas, such as designing a frame or wheels, are just average. Are you going to spend your time trying to improve those skills and reinvent the wheel (or frame, as it were)? No.

    Instead, you're going to team up with someone who makes the best tires and someone who makes the best frames and combine your engine with them. This saves you time and money since you're focusing on your product, not providing supporting infrastructure.

    This also applies to software. If tools you need are very complex, get them from somewhere else, because you want to focus on making your final product, not the infrastructure to allow you to make the product.

Check whether content already sent in PHP

Is there a smart way to check whether output has already been sent to the browser and sending a header would generate a PHP_WARNING?

Obviously there's the alternative of using an output buffer, but that's not always an option.

From stackoverflow
  • If all you want is to hide the warning, just turn off error reporting:

    $old_er = error_reporting(0);

    Or, you can redirect PHP errors and warnings to a log file (which is preferable in production, IMO).

  • You can use the headers_sent() function. This works because the headers are always sent before anything else is output.

    Alan : I forgot that all headers are sent at once. You're right.
  • headers_sent()

  • if (headers_sent())
        trigger_error("Here you got your warning", E_USER_WARNING);