Thursday, April 14, 2011

Is there a way to substring a string in Python?

Is there a way to substring a string in Python, to get a new string from the 3rd character to the end of the string?

Maybe like myString[2:end]?

EDIT: If leaving out the second part means 'to the end', does leaving out the first part mean 'from the start'?

From stackoverflow
  • >>> x = "Hello World!"
    >>> x[2:]
    'llo World!'
    >>> x[:2]
    'He'
    >>> x[:-2]
    'Hello Worl'
    >>> x[-2:]
    'd!'
    
    Paolo Bergantino : oo, I was about to add more examples, but I see it was done for me. Thank you kind sir.
  • Yes there is. Your example is very close:

    myString[2:]
    
  • myString[2:] .. leave off the second index to go to the end

  • mystring[2:]

  • You've got it right there except for "end". It's called slice notation. Your example should read:

    new_sub_string = myString[2:]
    

    If you leave out the second param it is implicitly the end of the string.

  • Besides the direct answer that others have given, you can find all the other rules for slicing behavior explained in the Strings section of the official tutorial.

    tgray : +1: for having a link to more information :-)
  • One example seems to be missing here: full (shallow) copy.

    >>> x = "Hello World!"
    >>> x
    'Hello World!'
    >>> x[:]
    'Hello World!'
    >>> x==x[:]
    True
    >>>
    

    This is a common idiom for creating a copy of sequence types (not of interned strings); [:] shallow-copies a list. See python-list-slice-used-for-no-obvious-reason.

    Joan Venge : Does this create a new copy?
    gimel : A new copy will be created for lists - see edited answer.

Localized group name

I want to configure a Mutex access rule by assigning privileges to the "Everyone" group. When I create my rule it looks something like the following

new MutexAccessRule("Everyone", MutexRights.Modify | MutexRights.Synchronize | MutexRights.TakeOwnership | MutexRights.ReadPermissions, AccessControlType.Allow)

How do I get the localized "Everyone" group name so this will work on a non-English version of the OS?

Thanks

From stackoverflow
  • Rather than using the group name, use the underlying SID, which is not localised.

    var rule = new MutexAccessRule(new SecurityIdentifier(WellKnownSidType.WorldSid, null),
                                   MutexRights.Modify
                                    | MutexRights.Synchronize 
                                    | MutexRights.TakeOwnership
                                    | MutexRights.ReadPermissions,
                                   AccessControlType.Allow);
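
    To show how such a rule might be applied, here is a minimal hedged sketch (the mutex name is an invented example) that attaches the rule above to a named mutex via MutexSecurity:

    using System.Security.AccessControl;
    using System.Threading;

    // Attach the rule (built above with the non-localized World SID) to a security object.
    var security = new MutexSecurity();
    security.AddAccessRule(rule);

    // Create (or open) the named mutex with that security applied.
    bool createdNew;
    var mutex = new Mutex(false, @"Global\MyAppMutex", out createdNew, security);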
    

Can someone explain about Linux library naming?

When I create a library on Linux, I use this method:

  1. Build: libhelloworld.so.1.0.0
  2. Symlink: libhelloworld.so -> libhelloworld.so.1.0.0
  3. Symlink: libhelloworld.so.1 -> libhelloworld.so.1.0.0

The versioning is so that if you change the public facing methods, you can build to libhelloworld.so.2.0.0 for example (and leave 1.0.0 where it is), so that applications using the old library won't break.

However, what's the point in naming it 1.0.0 - why not just stick with libhelloworld.so and libhelloworld.so.1?

Also, is it best practice to name your library using 1.0.0 for example, or just 1?

g++ ... -Wl,-soname,libhelloworld.1

Or:

g++ ... -Wl,-soname,libhelloworld.1.0.0
From stackoverflow
  • The primary advantage of this method is easily letting users know which version of the library they have. For example, if I know a bug I'm getting was fixed in 1.0.4 I can easily check what version of the library I am linking against, and know if that's the right way to fix the bug.

    This number is referred to as the "shared object version" or "soversion" and is part of the ELF binary standard. IBM has a good overview of ELF at http://www.ibm.com/developerworks/power/library/pa-spec12/.

  • The way you're supposed to form the x.y.z version is like this:

    1. The first number (x) is the interface version of the library. Whenever you change the public interface, this number goes up.
    2. The second number (y) is the revision number of the current interface. Whenever you make an internal change without changing the public interface, this number goes up.
    3. The third number (z) is not a build number, it is the backwards-compatibility count. This tells you how many previous interfaces are supported. So for example if interface version 4 is strictly a superset of interfaces 3 and 2, but totally incompatible with 1, then z=2 (4 - 2 = 2, the lowest interface number supported)

    So the x and z numbers are very important for the system to determine if a given app can use a given library, given what the app was compiled against. The y number is mainly for tracking bug fixes.

  • From an old email I sent to a colleague about this question:

    Let's look at libxml as an example. First of all, shared objects are stored in /usr/lib with a series of symlinks to represent the version of the library available:

    lrwxrwxrwx 1 root root     16 Apr  4  2002 libxml.so -> libxml.so.1.8.14
    lrwxrwxrwx 1 root root     16 Apr  4  2002 libxml.so.1 -> libxml.so.1.8.14
    -rwxr-xr-x 1 root root 498438 Aug 13  2001 libxml.so.1.8.14
    

    If I'm the author of libxml and I come out with a new version, libxml 2.0.0, that breaks interface compatibility with the previous version, I can install it as libxml.so.2, and libxml.so.2.0.0. Note that it is up to the application programmer to be responsible about what he links to. If I'm really anal, I can link directly to libxml.so.1.8.14 and any other version will result in my program not running. Or I can link against libxml.so.1 and hope that the libxml developer doesn't break symbol compatibility on me in the 1.X version. Or if you don't care and are reckless, just link to libxml.so and get whatever version there is. Sometimes, when enough people do this, the library author has to get creative with later versions. Hence, libxml2:

    lrwxrwxrwx 1 root root     17 Apr  4  2002 libxml2.so.2 -> libxml2.so.2.4.10
    -rwxr-xr-x 1 root root 692727 Nov 13  2001 libxml2.so.2.4.10
    

    Note that there's no libxml2.so in this one. Looks like the developer got fed up with irresponsible application developers.

    ephemient : Umm, false. When you give `-lxml` to the dynamic linker, it will use the *library soname* in `libxml.so`, which would be `libxml.so.1`, so that's what the generated executable links against. ...(continued)
    ephemient : (continued)... Linking directly to `libxml.so` or `libxml.so.1.8.14` *does not* happen when the library is properly versioned. The `lib*.so` symlink is not needed for running executables, only developing them, so you may find that they are often placed in separate binary packages.
    ephemient : To see the library soname, use `readelf -d /usr/lib/libxml.so` (assuming you're on a development system with binutils installed).

What's the advantage of this indirect function call?

I found the following code in a library:

class Bar {
public:
  bool foo(int i) {
   return foo_(i);
  }
private:
  virtual bool foo_(int i) = 0;
};

Now I'm wondering: Why would you use this indirection? Could there be any reasons why the above would be better than the simple alternative:

class Bar {
public:
  virtual bool foo(int i) = 0;
};
From stackoverflow
  • This is often called a Template-Hook pair (a.k.a Hotspot), coined by Wolfgang Pree.

    See this PDF, PowerPoint, HTML

    One reason for doing the indirection, as you call it, is that things often can or must be set up before a method call, and some cleaning done after it. In the subclasses you only need to supply the necessary behaviour without doing the setup and cleaning...

  • If a subclass could change the definition of foo_, but the consumers needed a statically bound function (for efficiency)? Or for a delegation pattern?

  • This is the template pattern. The foo method contains code that must be executed by all subclasses. It makes more sense when you look at it like this:

    class Bar {
    public:
      bool foo(int i) {
       // Setup / initialization / error checking / input validation etc 
       // that must be done for all subclasses
       return foo_(i);
      }
    private:
      virtual bool foo_(int i) = 0;
    };
    

    It's nicer than the alternative, which is to try to remember to call the common code in each subclass individually. Inevitably, someone makes a subclass but forgets to call the common code, resulting in any number of problems.

  • This is the Non-Virtual Interface Idiom (NVI). That page by Herb Sutter has a good bit of detail about it. However, temper what you read there with what the C++ FAQ Lite says here and here.

    The primary advantage of NVI is separating interface from implementation. A base class can implement a generic algorithm and present it to the world while its subclasses can implement the details of the algorithm through virtual functions. Outside users are shielded from changes in the algorithm details, especially if you later decide you want to add pre- and post-processing code.

    The obvious disadvantage is that you have to write extra code. Also, private virtual functions are confusing to a lot of people. Many coders mistakenly think you can't override them. Herb Sutter seems to like private virtuals, but IMHO it's more effective in practice to follow the C++ FAQ Lite's recommendation and make them protected.
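
    Not part of the original C++ answers, but for comparison the same non-virtual-interface shape carries over to other languages; a minimal C# sketch with a protected hook, following the "make them protected" recommendation above:

    public abstract class Bar
    {
        // Non-virtual public interface: shared checks live here, once.
        public bool Foo(int i)
        {
            if (i < 0) return false;   // example of common validation
            return FooCore(i);
        }

        // Protected hook that subclasses override.
        protected abstract bool FooCore(int i);
    }

    public sealed class ConcreteBar : Bar
    {
        protected override bool FooCore(int i) { return i % 2 == 0; }
    }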

LastChildFill Not Working When Trying To Dock ItemsControl

I've got the following markup in a WPF UserControl:

<Border Name="_border" BorderThickness="4" BorderBrush="Blue">
    <Canvas Name="_canvas" Background="Black" >
        <DockPanel LastChildFill="True">
            <ItemsControl Name="_itemsControl" Background="Bisque" AllowDrop="True" Height="100" Width="100"
                      HorizontalAlignment="Stretch"
                      VerticalAlignment="Stretch"
                      ItemTemplate="{StaticResource pictureTemplate}"
                      ItemsPanel="{StaticResource panelTemplate}"
                      Drop="_itemsControl_Drop" 
                      DragOver="_itemsControl_DragOver" 
                      DragLeave="_itemsControl_DragLeave" 
                      PreviewMouseLeftButtonDown="_itemsControl_PreviewMouseLeftButtonDown"
                      PreviewMouseMove="_itemsControl_PreviewMouseMove">

        </ItemsControl>
             </DockPanel>
    </Canvas>
</Border>

I would like the ItemsControl to fill all of the available space, but it is not obeying the DockPanel's LastChildFill property. The Horizontal and Vertical "Stretch" values aren't helping either. What am I missing?

From stackoverflow
  • What's the size of your DockPanel? Try setting a background on your DockPanel for testing.

    I don't think your problem is that the ItemsControl isn't stretching to fill the DockPanel; rather, the DockPanel isn't stretching to fit inside the Canvas. A Canvas and its children will not resize to fit their parent.

  • If you remove the Canvas completely, you will not have this problem. Once you introduce a canvas, you need to set size and position of elements in the Canvas (like the DockPanel).

ASP.Net FileUpload Control with Regex Validator postback problem

I'm trying to use a .NET FileUpload control along with a Regex Validator to limit filenames to JPG, GIF, or PNG extensions. After postback, the filename is gone from the control (as expected), but this seems to cause the validator to fire and display its error text.

Can anyone suggest a fix, or a better way? Thank you!

From stackoverflow
  • Use a custom validator to do this check and call Page.IsValid in the method that handles the upload; this will stop the processing of the upload if the file does not have a valid extension.
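
    As a rough illustration (the handler and button names are assumptions; upAttachment is the FileUpload used elsewhere on this page), the server-side half might look like this:

    // Wired to a CustomValidator's OnServerValidate; checks the extension on the server.
    protected void cvalAttachment_ServerValidate(object source, ServerValidateEventArgs args)
    {
        string ext = System.IO.Path.GetExtension(upAttachment.FileName).ToLowerInvariant();
        args.IsValid = ext == ".jpg" || ext == ".jpeg" || ext == ".gif" || ext == ".png";
    }

    // The upload handler bails out when validation failed, so nothing is processed.
    protected void btnUpload_Click(object sender, EventArgs e)
    {
        if (!Page.IsValid)
            return;

        upAttachment.SaveAs(Server.MapPath("~/uploads/" +
            System.IO.Path.GetFileName(upAttachment.FileName)));
    }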

  • Just use a custom validator with the following javascript function:

    function UploadFileCheck(source, arguments)
    {
        var sFile = arguments.Value;
        arguments.IsValid = 
           ((sFile.endsWith('.jpg')) ||
            (sFile.endsWith('.jpeg')) ||
            (sFile.endsWith('.gif')) ||
            (sFile.endsWith('.png')));
    }
    

    The custom validator code:

    <asp:CustomValidator ID="cvalAttachment" runat="server" ControlToValidate="upAttachment" SetFocusOnError="true" Text="*" ErrorMessage="Invalid: File Type (allowed types: jpg, jpeg, gif, png)" ClientValidationFunction="UploadFileCheck"></asp:CustomValidator>
    

    That should be all you need to stop it client side before it gets posted back.

  • JavaScript has no "endsWith", so use this code for the custom validator:

    function UploadFileCheck(source, arguments)
    {
        var sFile = arguments.Value;
        arguments.IsValid =
            ((sFile.match(/\.jpe?g$/i)) ||
             (sFile.match(/\.gif$/i)) ||
             (sFile.match(/\.bmp$/i)) ||
             (sFile.match(/\.tif?f$/i)) ||
             (sFile.match(/\.png$/i)));
    }

How to create a HTTP request listener Windows Service in .NET

I want to create a Windows Service that acts as an HTTP listener and can handle around 500 clients. Are there any special considerations for this kind of service?

I am a little confused between the HTTPListener class and the TCPListener class. Which one to use for a Windows Service that will:

  1. Accept the client connection (around 500)
  2. Process client request
  3. Call another Http based service
  4. Return some value to the calling client


This is what I am doing to start the listener.

    listener = new HttpListener();
    listener.Prefixes.Add("http://localhost:8080/");
    listener.Start();
    listener.BeginGetContext(new AsyncCallback(OnRequestReceive), listener);

private void OnRequestReceive(IAsyncResult result)
{
     HttpListener listener = (HttpListener)result.AsyncState;
     // Call EndGetContext to complete the asynchronous operation.
     HttpListenerContext context = listener.EndGetContext(result);
     HttpListenerRequest request = context.Request;
}

Will I be able to handle N clients simultaneously?

From stackoverflow
  • I'd use the HttpListener class. It does a lot of the work for you and you wouldn't need to reinvent the wheel. It also integrates with the kernel-mode http.sys driver in Windows Server 2003 and should offer better performance.

  • If you need to be highly responsive, HttpListener will not scale very well (you cannot accept more than one connection at a time). But for small-scale use, being careful to get the context and write the response asynchronously, it can work.

    EDIT: It appears I misunderstood HttpListener's ability to accept connections. You can accept multiple connections concurrently (see comment). However, IIS's other facilities for hosting code would avoid reinventing the wheel. So unless there are very specific requirements that preclude IIS, why not take the easy path?

    For serious use, use IIS, creating an Http Handler (a class that implements IHttpHandler or IHttpHandlerFactory, or even an IHttpAsyncHandler) to handle the requests (this is the underlying type that implements .aspx et al).

    The right solution really depends on what you mean by "handle around 500 clients": one at a time or all at once?

    Based on the comment answer: 500 at once, and noting that the processing in step 3 includes another HTTP call, I doubt HttpListener will be able to handle the load without ensuring every operation is asynchronous (getting the context and request, performing the onward HTTP request, and then sending the response)... which will lead to some more difficult coding; a rough sketch of such a loop follows this answer.

    Unless there is a very good reason to avoid IIS, using IIS will be easier as it is designed to support large scale client request loads with rich functionality, whereas HttpListener "Provides a simple, programmatically controlled HTTP protocol listener.".

    EDIT: Expanded based on more details of load.

    A9S6 : I mean 1-500 clients can be connected to that Service at any given time. Also, the processing time of each client will be small.
    Darrel Miller : Can you provide some source that claims that HttpListener cannot accept more than one connection at a time? I have running code using HttpListener that appears to me to be handling multiple requests concurrently. By calling BeginGetContext multiple times you effectively setup multiple listeners. Also, HttpListener is based on http.sys which is identical to what IIS uses for serving requests, so the performance should be equivalent.
    Richard : Added note to that effect, thanks for the update.
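
    For illustration only, a minimal sketch of a fully asynchronous HttpListener loop, building on the question's code (the prefix is an example; real code would also need error handling and throttling):

    HttpListener listener;

    void Start()
    {
        listener = new HttpListener();
        listener.Prefixes.Add("http://localhost:8080/");
        listener.Start();
        listener.BeginGetContext(OnContext, listener);
    }

    void OnContext(IAsyncResult ar)
    {
        var l = (HttpListener)ar.AsyncState;
        HttpListenerContext context = l.EndGetContext(ar);

        // Immediately post another accept so new clients are not blocked
        // while this request is being processed.
        l.BeginGetContext(OnContext, l);

        // Hand the actual work (including any onward HTTP call) to the
        // thread pool; the response is written there, not here.
        System.Threading.ThreadPool.QueueUserWorkItem(_ =>
        {
            byte[] body = System.Text.Encoding.UTF8.GetBytes("hello");
            context.Response.ContentLength64 = body.Length;
            context.Response.OutputStream.Write(body, 0, body.Length);
            context.Response.OutputStream.Close();
        });
    }
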
  • I created a TcpListener and started the listener using the BeginAcceptTcpClient method. Each incoming request is taken to a separate thread for processing, where the client connection is used continuously to get/send data from the client. It seems to be working fine for a large number of clients.

Refactoring of very tangled code

I now have to refactor some code: it's basically one method (around 1000 lines of code) that does a lot of calculations and has a lot of variables. I'm not sure how to refactor it. Does code like

...
calculateSth(param1, param2, param3, param4, param5, param6);
calculateSthElse(param1, param2, param3);
...

look good?

I could introduce parameter objects, but those objects would only be used as params to some method, so it would look like this

...
calculateSth(calculateSthObject);
calculateSthElse(calculateSthElseObject);
...

or I could put everything in one big object and make it

...
calculateSth(calculateObject);
calculateSthElse(calculateObject);
...

however, with that solution I would have to pull out everything the private methods need at the beginning of the method and set it back at the end, and it would be a lot harder to find out which values the private methods use. Around half of the variables are needed as output.

How would you do it?

P.S. Calculations are not trivial, so doing things like

calculateObject.setMagicValue4(calculateObject.getMagicValue() * calculateObject.getMagicValue2() / calculateObject.getMagicValue3());

would only make it hard to read.

From stackoverflow
  • I would spend whatever time necessary to make sure I understand what the algorithm actually does. Then I'd try to find out how I'd really solve the problem if I could choose, which would probably involve a number of classes & concepts. Then I'd try to introduce these concepts one by one into the existing code, making sure to get proper test coverage for each concept as I introduce it.
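
    For illustration only (the names are invented, and this is not from the answer above): one common direction, once the algorithm is understood, is the "method object" refactoring: turn the long method into a small class whose fields replace the parameter lists and whose properties expose the outputs. A C# sketch:

    // Hypothetical sketch of Replace Method with Method Object.
    public sealed class SthCalculation
    {
        // Former parameters become fields set once in the constructor.
        private readonly double _param1, _param2, _param3;

        public SthCalculation(double param1, double param2, double param3)
        {
            _param1 = param1;
            _param2 = param2;
            _param3 = param3;
        }

        // Former "output" variables become read-only properties.
        public double MagicValue { get; private set; }
        public double MagicValue2 { get; private set; }

        public void Run()
        {
            // Each block of the original 1000-line method becomes a named private method.
            MagicValue = ComputeMagicValue();
            MagicValue2 = ComputeMagicValue2();
        }

        private double ComputeMagicValue() { return _param1 * _param2; }
        private double ComputeMagicValue2() { return MagicValue / _param3; }
    }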

CollectionViewSource Filter not refreshed when Source is changed

Hi,

I have a WPF ListView bound to a CollectionViewSource. The source of that is bound to a property, which can change if the user selects an option.

When the list view source is updated due to a property changed event, everything updates correctly, but the view is not refreshed to take into account any changes in the CollectionViewSource filter.

If I attach a handler to the Changed event that the Source property is bound to I can refresh the view, but this is still the old view, as the binding has not updated the list yet.

Is there a decent way to make the view refresh and re-evaluate the filters when the source changes?

Cheers

From stackoverflow
  • Are you changing the actual collection instance assigned to the CollectionViewSource.Source, or are you just firing PropertyChanged on the property that it's bound to?

    If the Source property is set, the filter should be recalled for every item in the new source collection, so I'm thinking something else is happening. Have you tried setting Source manually instead of using a binding and seeing if you still get your behavior?

    Edit:

    Are you using the CollectionViewSource.View.Filter property, or the CollectionViewSource.Filter event? The CollectionView will get blown away when you set a new Source, so if you had a Filter set on the CollectionView it won't be there anymore, whereas the Filter event is attached to the CollectionViewSource itself (see the sketch after this answer).

    Steve : Yes, I am changing the collection, and the items in the list view are updated reflecting the new collection. However filter is not re-evaluated. Doing it manually didn't help: ((CollectionViewSource)this.Resources["logEntryViewSource"]).Source = _application.CurrentLog.Entries.ObservableCollection
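
    To make the distinction concrete, a minimal hedged sketch of the Filter-event approach (the resource name follows the comment above; LogEntry and IsVisible are invented placeholders):

    // Handle CollectionViewSource.Filter instead of setting View.Filter directly,
    // so the filter logic is re-applied to whatever view the CVS creates next.
    var cvs = (CollectionViewSource)this.Resources["logEntryViewSource"];
    cvs.Filter += (sender, e) =>
    {
        var entry = e.Item as LogEntry;
        e.Accepted = entry != null && entry.IsVisible;
    };

    // Later, when the underlying collection changes, the same handler still applies:
    cvs.Source = _application.CurrentLog.Entries.ObservableCollection;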

What to do with multiple projects depending on the same source?

This is something I've come across twice in the past month and I'm not even certain how to phrase this as a Google query.

I'm actually using SVN for all of this, but it seems like this should be a general versioning problem.

We have two projects and one of them is dependent on some of the other's code. Due to API issues, it is not pragmatic to have some form of linkage between the products (and I don't want to have to configure all of the non-coders' machines to make this work).

I would imagine that if I put a copy of the shared code into the directory structure, I will end up overwriting all of the config files that SVN uses. This would mean that the version in the dependent project's directories would no longer be able to update.

Ex:

Project #1 needs to use the class MyExampleClass, however, MyExampleClass is something which is defined as a part of and needed by Project #2.

From stackoverflow
  • Look into svn:externals

  • Put all shared files in a separate folder in either one of the projects or in a separate one. Then use externals to reference that folder. Mixing files from different places in the same folder is a bad idea.

  • svn:externals will allow you to bring in files at a directory level. Like:

    Proj1\
      File1
      File2
    
    Proj2\
      File3
      File4
    

    Then in Proj2 you can svn:externals Proj1, and end up with:

    Proj2\
      Proj1\
        File1
        File2
      File3
      File4
    

    Now if you want to have files from 2 projects in 1 folder like:

    Proj2\
      File1 <- from Proj1
      File2 <- from Proj1
      File3
      File4
    

    Then I don't think SVN supports that.

    However, I have worked with other source control tools that would let you "link" a single file from one project to another anywhere you want (Borland StarTeam in particular).

    Wim Coenen : svn:externals supports file linking in svn 1.6. 1.6 RC3 is currently already available. The nightly build of tortoiseSVN is built against svn 1.6.
    rally25rs : @wcoenen: thanks for the info! looks like we are still on SVN 1.5.5 here... no wonder it wasn't working for me :)
  • We've used svn:externals pointing to shared code in practice for a few years now. We have had some interesting problems with it that you should probably consider before using it though. Here is the structure that we have:

    root
    +---common
    |   +---lib1
    |   |   \---trunk
    |   |       +---include
    |   |       \---src
    |   \---lib2
    |       \---trunk
    |           +---include
    |           \---src
    +---proj1
    |   \---trunk
    |       +---include
    |       |   \---common
    |       \---src
    |           \---common
    \---proj2
        \---trunk
            +---include
            |   \---common
            \---src
                \---common
    

    The common directories in both include and src in a project contain external definitions that pull in the common libraries:

    c:\dev> svn pget -v svn:externals proj1\trunk\src\common
    Properties on 'c:\dev\proj1\trunk\src\common':
      svn:externals : lib1 http://.../common/lib1/trunk/src
                      lib2 http://.../common/lib2/trunk/src
    

    The problem that we've run into is multifaceted but related to tagging and branching our source as the projects change throughout time. The externals definition that I showed above has a few pretty severe problems if you want to have reproducible builds:

    1. It refers to a dynamic target - trunk.
    2. It doesn't refer to an explicit revision.

    When you branch using svn copy, the externals are copied verbatim since they are really just properties attached to the object. Some of the other svn commands (commit, checkout, and export) actually interpret the properties. When you tag a project, you really want to preserve the state of the project for all time. This means that you have to "pin" the externals to a specific revision so you need to change the externals definition to explicitly refer to the revision that you want (e.g., "lib1 -r42 http://.../common/lib1/trunk/src"). This solves one facet of the problem.

    If you have to maintain multiple incompatible branches of the common code, then you have to specify which branch you want explicitly along with (possibly) the revision.

    Needless to say, this can be a bit of a headache. Luckily someone out there in Subversion land wrote the svncopy.pl script that automates some of this mess. We are still (and have been) struggling with some of the difficulties supporting this in a field-deployed product with a bunch of shared code and a mandate of three different versions in the field at any time.

    If you do go down this route, then be sure to consider how you will maintain the linkages as the projects grow and change. We've found that a little bit of time thinking about a process will go a long way here.

  • Externals, but do be aware of this issue:

    http://stackoverflow.com/questions/683716/subversion-update-externals-to-a-date

How would you write an Upsert for LINQ to SQL?

So I'd like to write a generic Upsert function for LINQ to SQL and I'm having some trouble conceptualizing how to do it. I'd like it to work something like this:

var db = new DataContext();
db.Customers.UpsertOnSubmit(customer);

So it would have to be generic in some way and I guess an extension method on Table. I've been able to get this far in determining the primary key of the underlying table:

var context = source.Context;
var table = context.Mapping.GetTable(source.GetType());
var primaryMember = table.RowType.DataMembers.SingleOrDefault(m => m.IsPrimaryKey);

I'm assuming it will be necessary to have this to compose a query to tell if the item is in the DB already or not but I don't really know what to do with it at this point.

From stackoverflow
  • I do something similar, but with a different approach. Every entity implements IEntity. One of the properties of IEntity is a state indicating whether the object is new or existing. I then implement that for each entity, like:

    public EntityState EntityState
    {
        get
        {
            if (_Id > 0)
                return EntityState.Existing;
            else
                return EntityState.New;
        }
    }
    

    Then, a generic Upsert could be (on a generic repository type class):

    public virtual void Upsert<Ta>(Ta entity)
        where Ta: class
    {
        if (!(entity is IEntity))
            throw new Exception("T must be of type IEntity");
    
        if (((IEntity)entity).EntityState == EntityState.Existing)
            GetTable<Ta>().Attach(entity, true);
        else
            GetTable<Ta>().InsertOnSubmit(entity);
    }
    
    private System.Data.Linq.Table<Ta> GetTable<Ta>()
        where Ta: class
    {
        return _dataContext.Context.GetTable<Ta>();
    }
    

    If you're attaching from another DataContext, also make sure you have a timestamp on your objects.
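
    Going back to the question's primaryMember snippet, here is a rough, untested sketch (not from the answer above; the helper name and structure are invented) of how the primary-key metadata could drive a generic UpsertOnSubmit extension:

    using System;
    using System.Data.Linq;
    using System.Linq;
    using System.Linq.Expressions;

    public static class TableExtensions
    {
        // Sketch only: assumes a single-column primary key and that Attach(entity, true)
        // is acceptable for updates (i.e. a timestamp column or UpdateCheck.Never).
        public static void UpsertOnSubmit<T>(this Table<T> table, T entity) where T : class
        {
            var meta = table.Context.Mapping.GetTable(typeof(T));
            var pk = meta.RowType.DataMembers.Single(m => m.IsPrimaryKey);

            // Build the predicate  e => e.<PK> == <entity's PK value>
            object keyValue = pk.MemberAccessor.GetBoxedValue(entity);
            var e = Expression.Parameter(typeof(T), "e");
            var predicate = Expression.Lambda<Func<T, bool>>(
                Expression.Equal(
                    Expression.PropertyOrField(e, pk.Name),
                    Expression.Constant(keyValue, pk.Type)),
                e);

            if (table.Any(predicate))
                table.Attach(entity, true);     // row exists: treat as update
            else
                table.InsertOnSubmit(entity);   // row missing: treat as insert
        }
    }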

Silverlight 3: Navigation page always aligns left!

Hello,

I have a silly question. How to center the content of navigation pages???

I've been following this tutorial: http://silverlight.net/learn/learnvideo.aspx?video=187319 (source code on page)

I tried setting HorizontalAlignment to Center on every damn thing possible, but the content always ends up top left.

Anyone else have this problem?

From stackoverflow
  • OK found my mistake. I was using HorizontalAlignment=Stretch instead of HorizontalContentAlignment=Stretch.

jQuery mouseover issues

Hey everyone,

I have an image, and on mouseover it fades in two arrows (on the left and right sides); then on mouseout it fades those arrows out. My problem is that when the user hovers over the arrows, the image considers it a mouseout (since the arrows float above the image) and fades the arrows out, causing an infinite loop of fading in/out until you move the mouse away. What is the best way to prevent the arrows from fading out when the mouse hovers over them? I've tried one method, which you'll see below, but that hasn't been working out...

Here's some code:

$(".arrow").mouseover(function() {
 overArrow = 1;
 $("#debug_oar").text(1)
})

$(".arrow").mouseout(function() { 
 overArrow = 0;
 $("#debug_oar").text(0)
})

$("#image").mouseout(function() {
 $(".arrow").fadeOut(250,function() { $(".arrow").remove();})
})

$("#image").mouseover(function() {
 if(overArrow == 0) {
  $("#image").after("<div class='arrow' id='lArrow' style='display:none;position:absolute;'>&larr;</div><div class='arrow' id='rArrow' style='display:none;position:absolute;'>&rarr;</div>")
  // Define variables...
  aWidth = $("#lArrow").width();
  aHeight = $("#lArrow").height();
  height = $("#image").height()
  width = $("#image").width()
  pos = $("#image").position();
  // Calculate positioning
  nHeight = height/2
  y = pos.top + nHeight - (aHeight/2)
  // Position the left arrow
  $("#lArrow").css("top",y)
  $("#lArrow").css("left",pos.left+10)
  // Now the right...
  $("#rArrow").css("top",y)
  $("#rArrow").css("left",pos.left+width-aWidth-20)
  // Display 'em
  $(".arrow").fadeIn(250);
  // Debug info
  $("#debug_x").text(pos.left)
  $("#debug_y").text(y)
  $("#debug_height").text(height)
 }
})

Thanks

For those who are interested in the final code:

$("#image").mouseenter(function() {
 $("#image").append("<div class='arrow' id='lArrow' style='display:none;position:absolute;'>&larr;</div><div class='arrow' id='rArrow' style='display:none;position:absolute;'>&rarr;</div>")
 // Define variables...
 aWidth = $("#lArrow").width();
 aHeight = $("#lArrow").height();
 height = $("#image").height()
 width = $("#image").width()
 pos = $("#image").position();
 // Calculate positioning
 nHeight = height/2
 y = pos.top + nHeight - (aHeight/2)
 // Position the left arrow
 $("#lArrow").css("top",y)
 $("#lArrow").css("left",pos.left+10)
 // Now the right...
 $("#rArrow").css("top",y)
 $("#rArrow").css("left",pos.left+width-aWidth-20)
 // Display 'em
 $(".arrow").fadeIn(250);
 // Debug info
 $("#debug_x").text(pos.left)
 $("#debug_y").text(y)
 $("#debug_height").text(height)
});
$("#image").mouseleave(function() {
 $(".arrow").fadeOut(250,function() { $(".arrow").remove();})
})
From stackoverflow
  • Try calling stopPropagation() on your events, or use an alternative event that stops propagation by default, like mouseenter, mouseleave, or hover(over, out).

    Vestonian : This worked, I made it so the image was surrounded by a div, then the arrows were appended to the content of the div, then I used mouseenter and mouseleave. Thanks!
  • The arrows probably aren't children of the image, so hovering them is leaving the image.

    You want to wrap the image and the arrows with a wrapper element, and put the hover() call on that element instead. That way hovering over either the image or arrows triggers it.

    Alternately, learn how to use event delegation. Attach hover() to a higher up element that happens to encompass them both, and then check the target of the hover every time. If it matches either the image or the arrows, trigger the appropriate animation on the arrows.

    This is a good tutorial explaining event delegation in jQuery and how to use it.

    Finally, just for simplicity, use hover() rather than mouseover()/mouseout(). It captures your intent much more clearly. The first function you pass to hover() gets applied on mouseover, the second gets applied on mouseout.

  • Probably the simplest solution would be to put the fade-out transition handler on body and the fade-in as you have it now.

    This way the mouseout event that you're getting won't cause anything to happen, and only when you mouse away from the whole area will it fade out.

SQL: Delete rows from two tables

I have two tables. Those tables have two relations between them.

Table 1
   * ID_XPTO (PK)
   * Detail

Table 2
   * ID_XPTO (FK) (PK)
   * ID_XPTO2 (FK) (PK)

Those two relations exist: Table1 -< Table2 and Table1 -< Table2 (one relation per foreign key).

My issue is that I need to delete a row in Table1. I'm doing it this way:

declare @table Table (xptoTable2 int)
insert into @table
     select ID_XPTO2
     from Table2 
     where ID_XPTO = @ID_XPTO

delete from Table2
where ID_XPTO = @ID_XPTO

delete from Table1
where ID_XPTO in (select xptoTable2 from @table)

I know that I could use ON DELETE SET NULL on Table2. That way I could then search for all rows with a null value in ID_XPTO2 and delete them, but the DBA does not want to use it.

Is there some better solution to do this process?

From stackoverflow
  • Use ON DELETE CASCADE. It'll automatically delete referencing rows.

    rpf : Because DBA does not want to use it. I don't know why :S
    Mehrdad Afshari : You said DBA doesn't want SET NULL. I thought he might allow this :) As SET NULL is not the best way to go anyway.
  • Why don't you use ON DELETE CASCADE?

    DROP TABLE t_f
    DROP TABLE t_m
    CREATE TABLE t_m (id INT NOT NULL IDENTITY PRIMARY KEY , value VARCHAR(50))
    CREATE TABLE t_f (id INT NOT NULL IDENTITY PRIMARY KEY, m INT, CONSTRAINT fk_m FOREIGN KEY (m) REFERENCES t_m(id) ON DELETE CASCADE)
    INSERT INTO t_m (value) VALUES ('test')
    INSERT INTO t_f (m) VALUES (1)
    DELETE FROM t_m
    SELECT * FROM t_m
    SELECT * FROM t_f
    
    id           value
    ------------ ------
    0 rows selected
    
    id           m
    ------------ ------
    0 rows selected
    
    rpf : Because DBA does not want to use it. I don't know why :S
  • Two methods I know of:

    1. You could use ON DELETE CASCADE

    2. Write your SQL to clean up after itself ie:

       DECLARE @DetailCriteria ...
      
      
       SET @DetailCriteria = '....'
      
      
       BEGIN TRAN
       -- First clear the Table2 of any child records
          DELETE FROM Table2 
          WHERE 
            ID_XPTO IN (SELECT ID_XPTO FROM Table1 WHERE Detail = @DetailCriteria)
            OR ID_XPTO2 IN (SELECT ID_XPTO FROM Table1 WHERE Detail = @DetailCriteria)
      
      
       -- Next clear Table1 (which will delete fine because you've followed the referential chain)
          DELETE FROM Table1 WHERE Detail = @DetailCriteria
      
      
       -- commit if you're happy (should check @@ERROR first)
       COMMIT
      
  • You have these options:

    • Delete in two statements, as you are doing now. Delete from Table2 first.

    • Delete from two tables in one statement, if your brand of database supports multi-table DELETE syntax (e.g. MySQL). This is not standard SQL, but it is handy.

    • Use cascading referential integrity constraints (I understand your DBA has nixed this option).

    • Write a trigger BEFORE DELETE on Table1, to delete or set NULL any reference in Table2. Check with your DBA to see if this is any more acceptable than the cascading RI constraints.

    Finally, I would advise talking to your DBA and asking the same question you asked here. Find out what solution he/she would prefer you to use. Folks on StackOverflow can answer technical questions, but it sounds like you are dealing with an IT policy question.

Code coverage in unit testing

This is about .NET libraries (DLLs).

What are the options for measuring code that is covered by unit test cases? Is it actually worth the effort (measuring code coverage)? I suspect it might be too easy to cover 70% of the code and almost impossible to go beyond 90%.

[EDIT] Another interesting question (put up by "E Rolnicki") is: What is considered a reasonable coverage %?

From stackoverflow
  • NCover will help show you coverage. Coverage is incredibly useful, unfortunately it can obviously be gamed. If you have bad developers covering code just to get the %age up, yes, it will ultimately be useless and hide uncovered areas. Once you fire those people you can fix it and get back to useful information. Setting coverage goals that are unattainable is a sure-fire way to get bad coverage.

    E Rolnicki : what is considered a reasonable coverage %?
    Nick Veys : Whatever you feel comfortable with really. 70% should be doable but some areas you'll want 100%, others you won't care much at all. Knowledge about your codebase is what you're going for ultimately, so after a few runs of the coverage tool you get a feel for what you want out of it.
  • I haven't used it personally, but one of my co-workers swears by nCover (http://www.ncover.com/).

    As far as coverage goes, in Ruby at least, 70% is easy, 90% is doable, and 100% is seldom a possibility.

  • NCover (both the commercial one and the open source one with the same name) and the code coverage tool in Visual Studio are pretty much your main tools in the MS world.

    Code coverage is a reverse metric. It doesn't really show you what code is adequately tested. Like Nick mentioned, you can have tests that cover code but don't really test much. Code coverage instead tells you which areas of your code have absolutely no tests. From there, you can decide if it makes sense to write tests for that code.

    In general, I think you should do code coverage since it doesn't take much effort to set up and it at least gives you more info about your code than you had before.

    I agree that getting that last fraction of code is probably the toughest and there may be a point where the ROI on it just doesn't make sense.

    wowest : Exactly what I wanted to say.
  • If you are doing Test Driven Development your code should be hitting at least 70% without trying. Some areas you just can't cover, or it is pointless to cover; that's where NoCoverage attributes for NCover come in handy (you can mark classes to be excluded from code coverage).

    Code coverage shouldn't be adhered to religiously, it should simply be a useful way to give you hints at areas you have missed with testing. It should be your friend, not Nazi-like!

  • There are two things to take into account when looking at Code Coverage.

    1. Code coverage is a battle of diminishing returns: beyond a certain point each additional percentage yields less value. Some code (like core libraries) should be 100% covered whereas UI code/interaction can be very difficult to cover.
    2. Code coverage is a metric that can be deceiving: 100% code coverage does not equate to a fully tested application.

    Take for example this snippet:

    if (a == 2)
    {
        do_A_Stuff();
    }
    
    if (b == 3)
    {
        do_B_Stuff();
    }
    

    Run a test where a = 2 and a second test where b = 3. That's 100% code coverage. But what happens with a test where a = 2 and b = 3? These are "Variable Relationships" and can lead to overconfidence in coverage metrics.

  • Visual Studio Team System Developer Edition includes code coverage. This can be included when running unit tests.

    Open up the .testrunconfig and select which assemblies you want data for.

How to embed image in html and send html as email by msdb.dbo.sp_send_dbmail?

I can use msdb.dbo.sp_send_dbmail to send out email in HTML format. It works very nicely for text-only content. For example:

EXEC msdb.dbo.sp_send_dbmail
  @recipients = @p_recipients,
  @subject = @v_subject,
  @body=@emailHTML, 
  @body_format = 'HTML';

However, if I want to include images, such as trend charts generated from data on my SQL Server, and embed them in the HTML (@emailHTML), what HTML tag should I use?

If I use an img tag, then I need to set the src attribute. The generated images are saved on my local SQL Server's hard disk. I could place them in an IIS server's web area, but all those web servers are accessible from the intranet only, not from outside my work.

Is there any way to embed images in the email? How can I set up the HTML to embed images?

I am using Microsoft SQL Server 2005. I prefer msdb.dbo.sp_send_dbmail to send reports out as email, since it gives me good control of the HTML format. If there is no way to do that, I may have to send images as attachment files.

From stackoverflow
  • I think I got the answer:

    EXEC msdb.dbo.sp_send_dbmail
       @recipients = 'myemail@someemail.com',
       @subject = 'test',
       @file_attachments = 'C:\MyFolder\Test\Google.gif;C:\MyFolder\Test\Yahoo.gif',
       @body=N'<p>Image Test</p><img src="Google.gif" /><p>See image there?</p>
            <img src="Yaoo.gif" /><p>Yahoo!</p>', 
       @body_format = 'HTML';
    

    Basically, add the image as an attachment, and the src attribute contains just the image file name; no path is needed. If more than one image file is needed, just use ";" to separate them.

    I sent the email to my Outlook account and it works. Trying it with my Yahoo account....

    David.Chu.ca : typo for src="Yahoo.gif". It was broken image. Now it works.
  • The image comes through as an attachment in Gmail and Yahoo... any ideas?

How to get started with svn:externals?

I'm looking for a succinct and well-written tutorial on using svn:externals.

I already know how to use them in a basic way myself, but I want a good article that I can link to when answering questions like this one that come up recently:

http://stackoverflow.com/questions/662898/what-to-do-with-multiple-projects-depend-on-the-same-source/662905

I'd do it myself, but I don't use them often enough to want to stick my neck out and write a tutorial on it. Google was surprisingly unhelpful with this topic.

From stackoverflow

Html Element properties in c#

Hi all. How do I create/access my own properties for elements in C# that I will use in JS? And how do I access properties that are available in HTML but don't appear to be exposed in C#, like the border property for tables? I know I can do it with styles and classes, but that seems like a workaround as opposed to the most robust way to do it. Thanks in advance.

From stackoverflow
  • The Attributes property of the WebControl base class is what you're looking for. Example:

    MyControl.Attributes["myattr"] = "examplevalue";
    
    Praesagus : Thank you so much. That was it. Any suggestions for the table attribute: border. C# wants to set it to 0 by default and changing the attribute only adds another border=1.
    Ken Browning : There's a BorderWidth property on the Table class. Have you explored that property?
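
    To tie the two suggestions together, a small hedged sketch (the control name myTable is invented) for an ASP.NET Table in code-behind:

    // Custom attribute rendered onto the HTML element, readable from JavaScript:
    myTable.Attributes["myattr"] = "examplevalue";

    // Strongly typed property instead of fighting a raw border="..." attribute:
    myTable.BorderWidth = Unit.Pixel(1);
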
  • The most robust as well as the most correct way of doing it is through the CssClass property and a class defined inside a .css file.

    One reason for this is that if you have a designer who only touches CSS, they can change styles without touching your C# source code. If you don't have a designated CSS person, layer separation is still beneficial - just imagine looking through source code to change a border color.

    Separating CSS, source code and JS as much as you can is the advisable practice.

    Praesagus : Thanks for the larger picture considerations. I will keep that in mind as I choose which to put where.
    : I'd like to add that you can always use JS and DOM to modify CSS styles inline on the fly.

ADO Entity Framework creating unwanted Entity Key

Hi, I need to use tables from a DB which I cannot alter (accessed via a linked server). So part of my schema is a view on these tables, and I cannot create an FK in my DB.

When I come to creating the association in the ADO.NET Entity Framework I am getting problems, because a second column on the table from the external DB has an index on it and EF is creating an Entity Key for it (it's the name/descr column of the record - I think they just wanted to speed up ordering on it).

When I take the Entity Key off this column in the EF entity it complains that I need it because the underlying table has a key on it. If I leave it in I cannot map it onto anything in the table mapping of EF.

Does anyone know what I should do please?

From stackoverflow
  • I have a similar situation occurring to me.

    I have a situation where I have a table that has a pk made up of two fields. The first one is related to the pk of my primary table, and the second is an identifier.

    Example

    Table1: SupplierId (pk)

    Table2: SupplierId (pk), AddressType (pk) (a value like main, shipping, mailing, etc.), Line1, Line2, etc.

    Because the AddressType is part of the pk, I am unable to do a number of things. In this scenario, I am trying to create inherited entities like MainAddress, ShippingAddress and MailingAddress.

    I have been unable to assign a condition to the table since AddressType is a key field.

    Anyone know a way around this?

  • You will have to edit the XML and remove the column from the key. Find the <EntityType> tag in the <edmx:StorageModels> section (SSDL content). Delete any <PropertyRef> in the <Key> that is not actually part of the primary key.

    Once you do this, you can set "Entity Key" on the corresponding scalar property in the designer to false, and EF won't get mad. You will also not be asked to map this column in associations anymore.

How to test if a C# Hashtable contains a specific key/value pair?

I'm storing a bunch of supposedly-unique item IDs as a key and the file locations as the value in a hash table while traversing a table. While I am running through it, I need to make sure that the key/location pair is unique or throw an error. I have the hashtable set up and am loading the values, but am not sure what to test:

Hashtable check_for_duplicates = new Hashtable();
foreach (object item in items)
{
    if (check_for_duplicates.ContainsKey(item["ItemID"]) &&
        //what goes here?  Would be contains item["Path"] as the value for the key)
    {
        //throw error
    }
}
From stackoverflow
  • If you were using Dictionary instead, the TryGetValue method would help. I don't think there is a really better way for the pretty much deprecated Hashtable class.

    object value;
    if (dic.TryGetValue("key", out value) && value == thisValue)
      // found duplicate
    
    Brian : which namespace should I use to enable Dictionaries? They're not in any of the default namespaces.
    Mike Rosenblum : The Dictionary class (http://msdn.microsoft.com/en-us/library/xfhwa508.aspx) was introduced in .NET 2.0 and is located in the System.Collections.Generic namespace.
  • if (check_for_duplicates.ContainsKey(item["ItemID"]) &&
        check_for_duplicates[item["ItemID"]] == item["Path"])
    {
        //throw error
    }
    
  • Try this:

    Hashtable check_for_duplicates = new Hashtable();
    foreach (object item in items)
    {
        if (check_for_duplicates.ContainsKey(item["ItemID"]) &&
            check_for_duplicates[item["ItemID"]].Equals(item["Path"]))
        {
            //throw error
        }
    }
    

    Also, if you're using .NET 2.0 or higher, consider using Generics, like this:

    List<Item> items; // Filled somewhere else
    
    // Filters out duplicates, but won't throw an error like you want.
    HashSet<Item> dupeCheck = new HashSet<Item>(items); 
    
    items = dupeCheck.ToList();
    

    Actually, I just checked, and it looks like HashSet is .NET 3.5 only. A Dictionary would be more appropriate for 2.0:

    Dictionary<int, string> dupeCheck = new Dictionary<int, string>();
    
    foreach(Item item in items) {
        if(dupeCheck.ContainsKey(item.ItemID) && 
           dupeCheck[item.ItemID].Equals(item.Path)) {
            // throw error
        }
        else {
            dupeCheck[item.ItemID] = item.Path;
        }    
    }
    
    Brian : Found a minor error: check_for_duplicates[item["ItemID"]] == item["Path"] should be check_for_duplicates[item["ItemID"]].Equals(Item["Path"])
    mquander : Regarding HashSet; you can compare the count in the resulting set with the count of the original collection if you want to figure out whether there were dupes. (Warning: not performant.)
  • It kinda depends what the items array is... you'll want something like:

    check_for_duplicates.ContainsValue(item["Path"]);
    

    Assuming that the item is some form of lookup. Really you need to be casting item, or using a type system to actually access any values via an index.

    Ian : My bad... I hadn't realised that there was an AND clause in the original question.
  • ContainsKey is the best method.

    If you aren't forced to use .NET 1.1 I would use the Dictionary introduced in .NET 2.0.

    It is much better than a Hashtable in terms of performance and is strongly typed.

    Dictionary<string, int> betterThanAHash = new Dictionary<string, int>();
    
    betterThanAHash.ContainsKey("MyKey");
    
    Brian : Which namespace should I use for this? Dictionary isn't in the default namespaces I'm using.
    Svish : then add the namespace for Dictionary =)
    TheMissingLINQ : @Brian - System.Collections.Generic
  • Hashtable check_for_duplicates = new Hashtable();
    
    foreach (object item in items) 
    {
        if (check_for_duplicates.ContainsKey(item["ItemID"]) && check_for_duplicates[item["ItemID"]] == item["Path"])
        {
            //throw error
        } 
    }
    

    I do believe this is what you're looking for.

    EDIT - Looks like I was beaten to the punch :P

    Brian : Almost, your example doesn't quite work (missing a ] on: check_for_duplicates[item["ItemID"]
    Matt Grande : That's correct, it's been fixed.
  • Why not use a Dictionary instead?

    That will throw an ArgumentException if you try to Add a key that already exists in the Dictionary.

    That way you can catch the duplicate at the moment it is being added, rather than performing a check_for_duplicates test later.
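
    A hedged sketch of that approach, reusing the field names from the question (Item, ItemID, Path, and the string key type are placeholders for however your rows expose these values; note that this flags any repeated ItemID, whether or not the path matches):

    var locationsById = new Dictionary<string, string>();
    foreach (Item item in items)
    {
        try
        {
            locationsById.Add(item.ItemID, item.Path);   // throws on a duplicate ItemID
        }
        catch (ArgumentException)
        {
            throw new InvalidOperationException("Duplicate ItemID encountered: " + item.ItemID);
        }
    }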

  • You didn't say what version of things you were using. Is there a reason you must use a Hashtable vs a HashSet? You would have no need to check for duplicates if your data structure didn't allow them. See also:

    http://www.vcskicks.com/csharp_data_structures2.html

    Other than that, the question of how to accomplish the same thing in Hashtable has already been answered here. I'm just pointing out that you don't need to do all the pathological checking if you forbid it in the first place.

Keeping DNS at one place. What possibilities are out there?

I host a couple of websites for some customers. So basically what they do is change their A Record to my server's IP and everything works fine.

Now I want to upgrade my server and of course my clients have to change their A Record.

So what I want for the future is to keep my flexibility in choice of servers without having to annoy my customers with changing their A Record.

Basically, what would be great is a reliable service that lets me repoint the A record to a new server. I have also thought of using a second server as a load balancer.

Does anyone have good experiences with either of these solutions, or any other ideas?

From stackoverflow
  • What you're looking for is either a dynamic DNS service (there are many) or to just run your own DNS servers and have your clients make your servers authoritative for their domains.

    Stefan Koenig : Yes I've looked at them, but I'm not sure if they are suitable for my problem. I just want to manage the A record, the MX records are managed by my customers themselves. So are they still suitable? It seems that dyn DNS services demand a change of the NS for the domain. That's not what I want.
    chaos : You should be able to tell a dynamic DNS service to manage a subdomain like Dave Webb describes. If they demand to take over your entire domain, stay far away.
  • Your customers could create a www subdomain instead of a www A record and then delegate this subdomain to your DNS server. You could then decide which web server should receive users' HTTP requests with a default * A record.

    Stefan Koenig : I'm actually trying this solution with one domain, still waiting for propagation.
  • We host our own DNS servers and instruct the client to point the domain at them. There really is no other good way to handle it. As far as load balancing, get a load balancer (dedicated hardware or a software appliance) and set it up to route info from the public IPs on the load balancer to the private IPs of the server. This gives you the flexibility to move servers around internally without ever changing the public IPs.

    Stefan Koenig : Yeah I probably will choose the load balancing solution, it gives me the highest flexibility. Any experience of the workload for a software load balancer and up to 50,000 visitors a month?
    Chris : We run Zeus ZXTM on a linux box with 4GB of ram and 2x dual core CPUs. Haven't had any issues on sites serving up to 1 million visitors a month.
  • Create a www.myhostingservice.example in your zone and ask your clients to make www.clientdomain.example an alias (CNAME) pointing to www.myhostingservice.example.