Sunday, March 27, 2011

Error when creating assembly in SQL: MSG 33009

I'm attempting to load a dll into MSSQL with:

USE dbname
GO

CREATE ASSEMBLY foo 
FROM 'C:\foo\foo.dll'
WITH PERMISSION_SET = UNSAFE
GO

And I'm getting an error that states:

Msg 33009, Level 16, State 2, Line 2
The database owner SID recorded in the master database differs from the database owner 
SID recorded in database 'dbname'. You should correct this situation by resetting the 
owner of database 'dbname' using the ALTER AUTHORIZATION statement.

MSDN really doesn't tell me any more about the error than the message itself does.

I've looked all over the internet and have come to the conclusion that the only thing anyone has ever done to avoid this is to run:

use dbname
go
EXEC dbo.sp_changedbowner @loginame = N'sa', @map = false

But is changing the owner really the only way to avoid this error? Why do I have to do this? Is there another way? I'd like some more information about this error before I go in and blindly change the owner.

From stackoverflow
  • I have had exactly the same problem, and the only solution for me was to change the owner, then change it back again.

    The problem is that users exist both per-database and per-server. What's happened is that the per-database user has a username that is the same as a per-server login, but their SIDs don't match, so SQL Server thinks it could be a different principal.
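    If you want to verify the mismatch before changing anything, here is a sketch of the diagnostic queries (using the standard system catalogs; substitute your own database name):

    ```sql
    -- Owner SID as recorded in master:
    SELECT owner_sid FROM sys.databases WHERE name = 'dbname';

    -- Owner SID as recorded inside the database itself (the dbo user):
    SELECT sid FROM dbname.sys.database_principals WHERE name = 'dbo';

    -- If the two differ, re-sync them. This is the fix the error message
    -- suggests, and is equivalent to the sp_changedbowner call above:
    ALTER AUTHORIZATION ON DATABASE::dbname TO sa;
    ```

    This situation typically arises when a database is restored or attached on a different server from the one where it was created, since logins with the same name can have different SIDs on different servers.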

All change comments to perforce branch between 2 labels? (including merges)

Our Perforce admin limits "MaxScanRows", so my first idea of running the following will not work:

  1. All changes, including integrates, into the branch at a particular label (time 1)
  2. All changes, including integrates, into the branch at a particular earlier label (time 2)
  3. Subtract the time-2 changes from the time-1 changes to get the new changes with comments.

Is there an alternative way of getting the same result without such a massive query? (Perforce contains 7 years of history, and -i triggers a scan back to the dawn of history.)

Based on Greg's comments, I've added this clarification:

Basically, the point is to see what bugs got fixed in a particular release branch between two labels (or, more commonly, between some old label and today). I want to simplify (and speed up) the overly complex script we currently have. It looks at changes that went into a release branch and follows the files in those changes at least two branches up, in order to print the changeset comments from the original change (the interim merge comments tend to just say something like "merge123" instead of describing the actual change, so we need to walk up the tree to the original comment as well). The script finally outputs something like the list below (we put Quality Center IDs into changeset comments):

  1. qualityCenterId123 - fixed some bug in gui
  2. qcId124 - fixed some other bug
  3. qcId125 - fixed some other bug
  4. merge123

UPDATE based on comments:

The problem with Toby's approach is that most of the changes in the code branch came via integrations. -i would include those changes, but as stated, that explodes the query to such a degree that, due to the load on the Perforce server, our admin won't allow it to run. This is why I am looking for an alternative approach that gets the same result.

From stackoverflow
  • Won't a normal Label-diff do what you want?

    • From P4V, Tools->Diff. Select the two labels
    • From P4Win, right click label, select diff files in 2 labels
    • From command line, p4 diff2 //codeline/...@label1 //codeline/...@label2

    Or am I missing exactly what you are after?

    Further suggestion after Ville's comment on the above

    If you are only after the info per changelist, rather than per file, then try "p4 interchanges" from the command line. This will give you just the summary of what changes in one branch have not happened in another, and you can supply a revision range to limit it to the labels you need.

    Do "p4 help interchanges" from command line for details.

    Unfortunately the interchanges command is not yet exposed in P4V or P4Win.

    Ville M : I would need the comments for each change; the diff-two-labels approach doesn't print comments for each change.
    Greg Whitfield : I've edited my answer to give you another possibility. If that does not work for you, could you perhaps type a few lines of what you expect the output to look like?
    Ville M : Thanks, I just started looking into the interchanges command, very interesting. Your question: I wish to simplify (speed up) a script that looks at changes into a branch and follows them up to 2 branches to print out changeset comments; it outputs: qualityCenterId123 - fixed some bug in gui qcId124 - fixed some other bug
    Ville M : I've been playing with the p4 interchanges command, but one thing I don't quite understand: if I give it p4 interchanges -l //depot/path/...@label_old //depot/path/... it says everything is already integrated, even though a diff between head and that label shows differences. Why is that?
  • Are your labels more than simply the most recent changelist when they were created? E.g., did you really need to record specific files in client workspaces? If not, you can very easily compare the two changelists closest to the labels.

    Say the closest change to your first label date is 23000 and the closest change to your second label date is 25000; then

    p4 changes //depot/PATHTOMYCODE/...@23000,@25000

    will give you all changes to your code path between these two changelists.

  • Toby Allen's Answer is the best approach if your labels are simple changelists.

    If the labels are more complicated, then I think you need to look at all the files in each label and where their versions differ, find the changelist where the version changed.

    You can get the file and version lists with:

    p4 fstat -Of //...@MyLabel
    

    EDIT:

    Consider two complex labels:

    VERSION_A:
     //depot/file_A.cpp#4
     //depot/file_B.cpp#7
     //depot/file_C.cpp#1
    
    VERSION_B:
     //depot/file_A.cpp#6
     //depot/file_B.cpp#5
     //depot/file_C.cpp#4
    

    In this example, the labels do not describe a particular changelist; the head change for each file may be different.

    If you can have labels like this, then you can run the p4 fstat command on each label and then find the differences. In this example, file_A.cpp has changed twice and file_C.cpp has changed 3 times. file_B.cpp is older in the second label, so it can be ignored.

    So now you need to look at the changes that involved these versions:

    file_A.cpp#5
    file_A.cpp#6
    file_C.cpp#2
    file_C.cpp#3
    file_C.cpp#4
    

    Those changes can be retrieved with p4 filelog, so you want to run something like this:

    p4 filelog file_A.cpp#6
    p4 filelog file_C.cpp#4
    

    Then you need to remove any duplicates and any history for earlier versions.

    Like I said, you only need this if you have messy labels. If there's any way to make your labels represent changelists, you should use Toby Allen's answer.

    Ville M : Added UPDATE to question based on the first part of your answer. I'm not sure I understand how fstat will help me get the commit comments between 2 labels?
    Mark James : The fstat will get you the version numbers for each file in a label. My assumption was that your labels were not simply the most recent changelist when they were created. If the labels are complicated you need to look at the individual file versions, then find the diffs. I'll add details above.
    Mark James : Thinking about it more, you're still going to need to run the filelog with the -i flag, so my example probably doesn't help you.
    Ville M : Mark, I think you are right, the whole problem really is the -i flag and the huge query it triggers. I maybe just need to change my process, and run the interchanges command before doing the merge, and it should more or less give me what I need. However I'll leave the question open for a bit more.
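The version-comparison step Mark describes can be sketched roughly as follows. This is an illustrative Python sketch, not a real integration with Perforce: the label contents below are hard-coded stand-ins for parsed `p4 fstat` output, mirroring the VERSION_A/VERSION_B example above.

```python
def changed_files(label_a, label_b):
    """Return {depot_path: (old_rev, new_rev)} for files whose revision
    is newer in label_b than in label_a (missing files count as rev 0)."""
    changes = {}
    for path, new_rev in label_b.items():
        old_rev = label_a.get(path, 0)
        if new_rev > old_rev:
            changes[path] = (old_rev, new_rev)
    return changes

# Stand-ins for the file/revision lists that `p4 fstat -Of //...@LABEL`
# would produce for each label:
version_a = {"//depot/file_A.cpp": 4,
             "//depot/file_B.cpp": 7,
             "//depot/file_C.cpp": 1}
version_b = {"//depot/file_A.cpp": 6,
             "//depot/file_B.cpp": 5,
             "//depot/file_C.cpp": 4}

diff = changed_files(version_a, version_b)
# file_B.cpp is older in the second label, so it is ignored.
for path, (old, new) in sorted(diff.items()):
    # One filelog per file at its newest revision; duplicates and
    # history for earlier revisions would still need filtering out.
    print("p4 filelog %s#%d" % (path, new))
```

As Mark notes in the comments, this doesn't remove the need for `filelog -i` when the changes arrived via integration, which is where the scan-row cost comes back.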
  • I can't see an easy answer to this, but do have a couple more suggestions that perhaps may help point in the right direction.

    1. Persuade your admin to raise the MaxScanRows limit. If he is nervous that this will lead to problems for the whole user base, just get him to add you to a new user group (e.g. "Scripting") and set the limits for just that group. That way only members of that group can use the higher limits, and you can negotiate suitable times to run the script. You could even do it overnight.
    2. Have a look at the P4 admin guide and see if any of the hints on scripting will help - e.g. maybe a tighter view on the data will limit the query enough not to break the MaxScanRows limits.
    3. How's your SQL? You may be able to construct an efficient query using the P4Report tool.
    4. Try asking the question on the Perforce mailing list. It's a very active list that has a lot of very experienced people who are very helpful. See this link for the sign-up page. There's a good chance that they will suggest some good approaches.
    5. Probably too late for your existing labels, but consider using the job system to track work. Perforce has built-in query tools to track which jobs have made it into different branches. It does require a working-practice change for your team, however.

    Sorry I can't provide a more specific answer.

    Ville M : I think this is the answer I was looking for, "no easy answer", good to know, will follow up on the list etc as you suggested, thanks.
    Greg Whitfield : Thanks. It's one of those tricky problems where you already have a solution that does the main problem of generating a report, but it does not fit within the constraints you are under. Good luck.

Class Library of Constants--Best Practice?

I was using .NET Reflector on an internal app to try to understand what the previous dev was doing, and also to learn. I have never had formal instruction in how to develop apps, so I take it from where I can (hooray, Stack Overflow). That being said, I found something that has me confused: a class library called WinConstant containing the code below.

Here are my actual questions:

  1. What possible use could this be?

  2. What value is there in storing a bunch of constants in a class library?

  3. Is this considered a "Best Practice"?

Thoughts and guidance appreciated!


Public Class clsConstant
    Public Const cAccess As String = "Access"
    Public Const cAddress As String = "Address"
    Public Const cCancel As String = "Cancel"
    Public Const cCity As String = "City"
    Public Const cClear As String = "Clear"
    Public Const cClickOnMessage As String = "Click on any row in top pane to see the detail fields in the bottom pane."
    Public Const cClientID As String = "ClientID"
    Public Const cColon As String = ": "
    Public Const cComma As String = ","
    Public Const cContactID As String = "ContactID"
    Public Const cCounty As String = "County"
    Public Const cDash As String = "-"
    Public Const cDelete As String = "Delete"
    Public Const cDepartment As String = "Department"
    Public Const cError As String = "Error"
    Public Const cExec As String = "Exec"
    Public Const cFalse As String = "False"
    Public Const cFavorite As String = "Favorite"
    Public Const cFederal As String = "Federal"
    Public Const cFriday As String = "Friday"
    Public Const cfrmMain As String = "frmMain"
    Public Const cfrmModuleLogin As String = "frmModuleLogin"
    Public Const cfrmModuleSplash As String = "frmModuleSplash"
    Public Const cHelp As String = "Help"
    Public Const cHint As String = "Hint"
    Public Const cImagePath As String = "../../image"
    Public Const cIn As String = "In"
    Public Const cInformation As String = "Information"
    Public Const cInitialScreenID As String = "InitialScreenID"
    Public Const cInsert As String = "Insert"
    Public Const cJuvenileID As String = "JuvenileID"
    Public Const cLetter As String = "Letter"
    Public Const cManual As String = "Manual"
    Public Const cMasterID As String = "MasterID"
    Public Const cModuleID As String = "ModuleID"
    Public Const cModuleName As String = "ModuleName"
    Public Const cMonday As String = "Monday"
    Public Const cName As String = "Name"
    Public Const cNegative As String = "Negative"
    Public Shared ReadOnly cNLowDate As DateTime = New DateTime(&H851055320574000)
    Public Shared ReadOnly cNullDate As DateTime = New DateTime
    Public Const cNullDateString As String = "12:00:00 AM"
    Public Const cOfficeIDDefault As String = "01"
    Public Const cOne As Integer = 1
    Public Const cOut As String = "Out"
    Public Const cPopUp As String = "PopUp"
    Public Const cPositive As String = "Positive"
    Public Const cProcess As String = "Process"
    Public Const cProviderID As String = "ProviderID"
    Public Const cQuestion As String = "Question"
    Public Const cRead As String = "Read"
    Public Const cReferralID As String = "ReferralID"
    Public Const cReminder As String = "Reminder"
    Public Const cReport As String = "Report"
    Public Const cReportEngine As String = "ReportEngine"
    Public Const cReportEnginePath As String = "ReportEnginePath"
    Public Const cReportingServices As String = "ReportingServices"
    Public Const cReportServer As String = "ReportServer"
    Public Const cReportService As String = "ReportService"
    Public Const cReportServiceLocal As String = "ReportServiceLocal"
    Public Const cReportServiceServer As String = "ReportServiceServer"
    Public Const cSaturday As String = "Saturday"
    Public Const cSearch As String = "Search"
    Public Const cSelect As String = "Select"
    Public Const cSpace As String = " "
    Public Const cSQLLoginError As String = "SQL Server login/password invalid"
    Public Const cStart As String = "Select a module"
    Public Const cState As String = "State"
    Public Const cSubjectID As String = "SubjectID"
    Public Const cSunday As String = "Sunday"
    Public Const cThursday As String = "Thursday"
    Public Const cTooltipCancel As String = "Reset form data values back to before all manual changes."
    Public Const cTooltipClear As String = "Clears all data entry fields prior to an Insert"
    Public Const cTooltipClient As String = "Display a Client popup window."
    Public Const cTooltipClose As String = "Close this form"
    Public Const cTooltipDelete As String = "Delete the current record being displayed, no undo possible."
    Public Const cTooltipExe As String = "Initiate a batch process."
    Public Const cTooltipInsert As String = "Insert a brand new record"
    Public Const cTooltipSearch As String = "Perform a Search for values entered."
    Public Const cTooltipSelect As String = "Perform a Select for values entered."
    Public Const cTooltipUpdate As String = "Update an existing record"
    Public Const cTrue As String = "True"
    Public Const cTuesday As String = "Tuesday"
    Public Const cUnderscore As String = "____________________________________________________________"
    Public Const cUpdate As String = "Update"
    Public Const cWarning As String = "Warning"
    Public Const cWeb As String = "Web"
    Public Const cWednesday As String = "Wednesday"
    Public Const cWorkerID As String = "WorkerID"
    Public Const cZero As Integer = 0
    Public Shared strLongDate As String() = DateAndTime.Now.ToLongDateString.Split(New Char() { ","c })
    Public Shared strModuleMainStatusStripFormID As String = Nothing
End Class
From stackoverflow
  • The only explanation I can think of is some sort of message library - but that doesn't hold for 90% of the entries.

    For instance this is just silly:

     Public Const cInsert As String = "Insert"
    

    This smells very very bad.

  • A constant is something that never changes, for example

    Public Const NumberOne As Integer = 1
    

    So this is my first remark: some of the stuff you summed up isn't really constant.

    Another downside is that using the Const keyword creates a binary dependency. This means that you will have to rebuild every assembly that uses your constants.dll; you can't just replace it. This is caused by the way constants work: the compiler replaces the name with the value at compile time.

    A solution to this problem is to use ReadOnly instead of Const.

    I don't really think it is a good practice to create such a library. I wouldn't allow my team to create one anyway...

  • Separating out literals from the rest of the code is a good idea.

    What's odd is that these should largely be resources rather than constant strings. Then they could be easily localized if needed, or replaced/updated without a recompile of the entire app.

    Some of those shouldn't even be resources: cUnderscore, for example, looks like it's using text to create a visual effect, which is generally a bad idea.

    In your predecessor's defense, I consider this code preferable to finding the same constants scattered throughout the source, as it will make refactoring to resources a little simpler.

  • This looks like a developer had a coding standard that said "Don't use string literals in code" and dutifully separated out every single constant, whether it made sense or not.

    For example, there is probably some element in there where the number 1 was needed in the code and instead of using DefaultNumberOfLineItems or some other descriptive constant, they used NumberOne = 1;

    A best practice would be to keep constants descriptive and close to their point of use. There's nothing wrong with a static class of related constants that have some type of meaning and are related to each other.

    For example, there is a system that I've worked on that tags attributes with unique keys. These keys are gathered in a static class with descriptive names for the attributes; the actual keys are generated by an automated system:

    public static class AttributeIDs
    {
       public const string Name = "UniqueGeneratedKeyForNameAttribute";
       public const string Description = "UniqueGeneratedKeyForDescriptionAttribute";
       ... etc.
    }
    

    In practice this is used like

    MyAccess.GetValueForAttribute(AttributeIDs.Name);
    

    which gets all the related constants together.

  • There are possible useful scenarios for such a class. Generally, things like "magic number" or "magic strings" are turned in to constants and placed in a static (shared) class. The reason for this is to isolate those "magic" values to a single location and allow them to be referenced by a meaningful name. (Typically used for numeric values.) In the case of string values, it helps ensure that things are referenced by the same value each time. The best example of this is string keys in to your app.config file.

    That being said, constants should be used for something that doesn't change (or so rarely changes that it is effectively constant). For the most part, strings that have the potential to change (or need to be localized) should be stored as resources in a .resx file.

    Joel Coehoorn : resources > constants, and if anything easier
    Scott Dorman : @Joel Coehoorn: I think it depends on how they are being used. For instance, ensuring a consistent name for a bunch of DllImport attributes. I can't use a resource for the name of the DLL but I can use a constant.
  • There is nothing wrong with having a class library of constants. Constants are a good practice in general. Enums are all over the place in .NET after all, and they are just grouped numeric constants. Having a dependency on an assembly of constants is no different than any other dependency. Understanding the purpose of the constants is more about the app's logic. My guess is that these constants enable verbose logging without filling the app with a bunch of string literals.

  • Back in the days of coding Windows applications in C, there were similar files #included in Windows programs, consisting of long lists of #defines creating constants. Various C applications emulated this approach in their own files. This "class" seems to be a transliteration of that C-ism. A fundamental principle of object-oriented design is to combine code and data into related functional units: objects. As jfullerton wrote:

    From a programming point of view, object-orientation involves program objects, encapsulation, inheritance, and polymorphism. The conceptual objects are modeled in the program code. Encapsulation keeps an object's data and methods that use the data together as part of the object.

    So clearly, this constant list does not conform to OO practices but is a throwback to the old days.

    To answer your questions:

    1. This class holds constants; that's it.
    2. The old developer probably did this because it was what he was used to doing.
    3. It's not current best practice.

    Naturally, if this is part of your application, you can't just throw it away. Rather, it is something to refactor over time, assuming you use the current best practices of test-driven development and refactoring.

    EnocNRoll : This code reminds me of .h files too.
    Scott Dorman : There are still very valid reasons for using classes consisting of constants. Not everything in the world of programming can/will conform to pure OO ideals. See some of the other responses for more explanations. This isn't neccessarily something that should be refactored away.
    StingyJack : This kind of list is nice when you work with a lot of datasets/datatables and need to reference columns or tables from many parts of an app.
    Refracted Paladin : @stingyJack- can you elaborate because that is what we have. How is it nice or is that a second post.. :)
  • It really looks like a naive implementation of a string table.

    The upside is no magic strings and "easy" system-wide changes. I would argue, though, that a resource file is easier to both implement and maintain.

    Is your colleague's implementation a best practice? I'd say no. Using string tables is, especially if you need internationalization in your app.

  • Putting string constants into a separate class is a best practice in many situations; however, this is a poor example. A better way would be to create a StringConstants namespace and then organize related string constants into separate classes. This is just a poor implementation of a good idea.

    If globalization is a concern, then (as others have pointed out), strings should be kept in resource files.

Selecting an index in a QListView

This might be a stupid question, but I can't for the life of me figure out how to select the row of a given index in a QListView.

QAbstractItemView, QListView's parent, has setCurrentIndex(const QModelIndex &index). The problem is, I can't construct a QModelIndex with the row number I want, since the row and column fields of QModelIndex have no mutators.

QTableView, which also inherits from QAbstractItemView has a selectRow(int row) function, why in the seven hells doesn't the QListView have this?

Good ol' Windows Forms has the SelectedIndex property on its list views.

From stackoverflow
  • You construct the QModelIndex by using the index(int row, int column) factory function of the model you gave to the view (createIndex() is protected and only usable from inside the model). QModelIndexes are short-lived and must be created by the model itself.

    Nailer : Thanks! I thought it had to be something like this!
  • This should help you get started

    // createIndex() is protected; use the model's public index() factory.
    // Note that the selection model belongs to the view, not the model.
    QModelIndex index = model->index( row, column );
    if ( index.isValid() )
        view->selectionModel()->select( index, QItemSelectionModel::Select );
    
    Nailer : Figured it out 10 days ago, but thanks for the effort =)

What is wrong with this SQLite query?

I'm creating an AIR application which connects to a SQLite database. The database balks at this insert statement, and I simply cannot figure out why. There is no error, it simply does not move on.

INSERT INTO Attendee (AttendeeId,ShowCode,LastName,FirstName,Company,Address,Address2,City,State,ZipCode,Country,Phone,Fax,Email,BuyNum,PrimaryBusiness,Business,Employees,Job,Value,Volume,SpouseBusiness,DateAdded,ConstructionWorkType,UserPurchaser,DecisionMaker,SafetyProducts,NearFuturePurchase,RepContact,Winner) VALUES('39610','111111','SMITH','JIM','COMPANY','1000 ROAD STREET','','PITTSBURGH','PA','15219','','5555555555','5555555555','PERSON@EXAMPLE.NET','','','0000000000000000000','','','','','0?','Fri Jan 30 14:20:08 GMT-0500 2009','other','neither','no','gas_detection','no','no','winner')

I know that the app can connect to the database, because it creates the table just fine. Here's the schema for the table for your reference:

CREATE TABLE Attendee (AttendeeId TEXT PRIMARY KEY,ShowCode TEXT,LastName TEXT,FirstName TEXT,Company TEXT,Address TEXT,Address2 TEXT,City TEXT,State TEXT,ZipCode TEXT,Country TEXT,Phone TEXT,Fax TEXT,Email TEXT,BuyNum TEXT,PrimaryBusiness TEXT,Business TEXT,Employees TEXT,Job TEXT,Value TEXT,Volume TEXT,SpouseBusiness TEXT,DateAdded TEXT,ConstructionWorkType TEXT,UserPurchaser TEXT,DecisionMaker TEXT,SafetyProducts TEXT,NearFuturePurchase TEXT,RepContact TEXT, Winner TEXT)

There is a good chance that there is an error in the INSERT statement, because if I try to execute it on a separate Adobe Air SQLite admin program, it throws an ambiguous error (#3115).

Thanks for any insight you might have.

EDIT:

For those wondering, if I make a simple table, thus:

CREATE TABLE Attendee (AttendeeId TEXT)

And try to insert, thus:

INSERT INTO Attendee (AttendeeId) VALUES('09283A')

I still get error #3115.

From stackoverflow
  • insert into Attendee ("AttendeeId", "ShowCode", "LastName", "FirstName", "Company", "Address", "Address2", "City", "State", "ZipCode", "Country", "Phone", "Fax", "Email", "BuyNum", "PrimaryBusiness", "Business", "Employees", "Job", "Value", "Volume", "SpouseBusiness", "DateAdded", "ConstructionWorkType", "UserPurchaser", "DecisionMaker", "SafetyProducts", "NearFuturePurchase", "RepContact", "Winner") values ('39610', '111111', 'SMITH', 'JIM', 'COMPANY', '1000 ROAD STREET', '', 'PITTSBURGH', 'PA', '15219', '', '5555555555', '5555555555', 'PERSON@EXAMPLE.NET', '', '', '0000000000000000000', '', '', '', '', '0?', 'Fri Jan 30 14:20:08 GMT-0500 2009', 'other', 'neither', 'no', 'gas_detection', 'no', 'no', 'winner');
    

    Try this one, it was computer generated.

    : If I also put double quotes around the values, then this works.
  • Have you tried running smaller statements? Like, a really simple INSERT on a really simple table?

    Have you tried quoting the column names? Maybe one of them is a reserved word.

    : If I make a simple table, thus: CREATE TABLE Attendee (AttendeeId TEXT) And try to insert, thus: INSERT INTO Attendee (AttendeeID) VALUES('09283A') I still get error #3115.
    : Sorry, in the INSERT statement, I did spell AttendeeId with the same case as it was in the CREATE TABLE statement on the "Id" portion.
  • Apparently you need to use double quotes rather than single quotes.
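For what it's worth, quoting problems like this disappear entirely with parameterized statements, which AIR's SQLStatement also supports via its parameters property. A minimal sketch of the same insert against SQLite itself (shown in Python here, outside AIR, purely for illustration):

```python
import sqlite3

# Parameterized insert: the driver handles all quoting, so values like
# 'PERSON@EXAMPLE.NET' or strings containing quotes need no escaping.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Attendee (AttendeeId TEXT PRIMARY KEY, LastName TEXT)")
conn.execute("INSERT INTO Attendee (AttendeeId, LastName) VALUES (?, ?)",
             ("39610", "SMITH"))
row = conn.execute("SELECT LastName FROM Attendee WHERE AttendeeId = ?",
                   ("39610",)).fetchone()
print(row[0])  # SMITH
```

Besides sidestepping quoting errors, parameters also protect against SQL injection when the values come from user input.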

How does the location of a script tag in a page affect a javascript function that is defined in it?

I read that you should define your JavaScript functions in the <head> tag, but how does the location of the <script> (whether in the <head>, <body>, or any other tag) affect a JavaScript function?

Specifically, I would like to know how it affects the scope of the function and where you can call it from.

From stackoverflow
  • If you have an inline script (outside functions) located before the functions it calls, you may get an error because they may not be available yet. I'm not saying it will always happen, just that it may, depending on browser type or version.

  • It doesn't. Most programming frameworks scatter scripts throughout the page. I've only rarely seen problems because of that (and only in older browsers).

    Simon_Weaver : hey! who's downvoting this without an explanation. perhaps someone who had lots of problems ??
  • One of the aspects of placement is performance. See this fine article within the YSlow discussion for why it's sometimes recommended you put them at the bottom of the document.

    As for issues of scope, the usual visibility rules for Javascript (vars defined inside or outside of functions, local, global, closures, etc.) are not affected so far as I know.

  • If your script refers to an ID on the page and the page has not been rendered yet (i.e. the script is before the HTML, or your script is executed with onload rather than when the DOM is ready), you can also get an error.

  • JavaScript's scoping rules are similar to Perl's - you can call any function at the current or any higher scope level. The only restriction is that the function has to be defined at the time you call it. The position in the source is irrelevant - only the position in time matters.

    You should avoid putting scripts in the <head> if possible as it slows down page display (see the link Alan posted).
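    A tiny sketch of the "position in time" point above: within a single script, function declarations are hoisted and callable before the line they appear on, but a function assigned to a variable only exists once the assignment has run.

    ```javascript
    // A function declaration is hoisted, so this call works even
    // though the declaration appears further down.
    console.log(declared());      // "ok"

    function declared() { return "ok"; }

    // A function expression assigned to a var is not: the variable is
    // hoisted, but its value is still undefined at this point.
    try {
        assigned();               // TypeError: assigned is not a function
    } catch (e) {
        console.log("not yet defined");
    }

    var assigned = function () { return "ok"; };
    console.log(assigned());      // "ok" - only after the assignment runs
    ```

    The same logic applies across multiple script blocks: a later block can call functions declared in an earlier one, but not the other way around at top level.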

  • The normal rules of play still stand; don't use stuff before it's defined. :)

    Also, take note that the 'put everything at the bottom' advice isn't the only rule in the book - in some cases it may not be feasible and in other cases it may make more sense to put the script elsewhere.

    The main reason for putting a script at the bottom of a document is for performance, scripts, unlike other HTTP requests, do not load in parallel, meaning they'll slow down the loading of the rest of your page. Another reason for putting scripts at the bottom is so you don't have to use any 'DOM ready' functions. Since the script tag is below all elements the DOM will be ready for manipulation!

    EDIT: Read this: http://developer.yahoo.com/performance/rules.html#js_bottom

    Mark Rogers : adding at the bottom affects seo as well, I'm told
    Alan : Excellent point, m4bwav
  • Telling people to add <SCRIPT> only in the head sounds like a reasonable thing to do, but as others have said there are many reasons why this isn't recommended or even practical - mainly speed and the way that HTML pages are generated dynamically.

    This is what the HTML 4 spec says :

    The SCRIPT element places a script within a document. This element may appear any number of times in the HEAD or BODY of an HTML document.

    And some sample HTML. Doesn't it look pretty all formatted here :)

    <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
         "http://www.w3.org/TR/html4/strict.dtd">
    <HTML>
    <HEAD>
    <TITLE>A document with SCRIPT</TITLE>
    <META http-equiv="Content-Script-Type" content="text/tcl">
    <SCRIPT type="text/vbscript" src="http://someplace.com/progs/vbcalc">
    </SCRIPT>
    </HEAD>
    <BODY>
    <SCRIPT type="text/javascript">
    ...some JavaScript...
    </SCRIPT>
    </BODY>
    </HTML>
    

    And something to look forward to in HTML 5 :

    New async attribute in <SCRIPT> :

    Note: There are ways [sic] a script can be executed:

    The async attribute is "true": The script will be executed asynchronously with the rest of the page, so the script will run while the page continues parsing.

    The async attribute is "false", but the defer attribute is "true": The script will be executed when the page has finished parsing.

    Simon_Weaver : @m4bwav - i mainly wanted to make the additional point that hadn't been made that that it is OK to put SCRIPT in BODY for anyone who thought they shouldn't. in addition the Yahoo 'performance rules' article mentioned is well worth reading and covers the issue thoroughly

Unit Testing my Email Service

How do I write a unit test for a method that use an external service like System.Net.Mail?

Do I just check a return value to see if there were any errors sending the email and assume if there were none that it was sent successfully?

From stackoverflow
  • Create a mock object for the mail service.

    In short, this means you write a class that replaces the behavior of sending a mail. This way you can easily create different situations for your test: a succeeded mail, a failed one, etc.

    A nice and short introduction: http://www.programmersheaven.com/user/pheaven/blog/217-An-Introduction-to-Mock-Objects/

    Another one, with code samples: http://blogs.clearscreen.com/ragc/archive/2004/08/31/395.aspx

  • It depends on the level of testing you want. If you are more concerned with testing the method, and the emailing is just a small part, then you might consider mocking the email sending service out. Gerrie's link is good.

    However, if you actually want to test the mail sending, here are some ideas.

    • Use SmtpDeliveryMethod.SpecifiedPickupDirectory (with SmtpClient.PickupDirectoryLocation) to output the email to a folder instead of a mail server. You can then check the folder.
    • Send the email, then query the remote POP server to see if it arrived.
    • Send and assume it just worked if there are no exceptions. I can't recall if SmtpClient.Send is blocking or non-blocking.

    Because these tests typically mean talking to external things, they'd be called integration tests rather than unit tests.

    But in general, I think you'd be more concerned with ensuring you got the formatting/email addresses correct, than whether or not Microsoft implemented SmtpClient correctly. So unit testing and mocks make more sense there.

    Gerrie Schenck : Agreed, it depends on what you want to test: the mail server or the business logic which is responsible in generating a mail.

Using a Continuous Integration Server for Home Development

As a follow up to one of my previous posts 'Using Version Control for Home Development', I am now asking about opinions as regards using a Build Server for a pet project.

Lately I have been reading about this 'Build Servers' concept, and I have looked at applications such as Maven and CruiseControl.Net.

And thus I ask, how feasible is it to use something like CruiseControl.Net for my home pet projects?

Reason I ask is that I think these Build Servers are mainly aimed at team projects... but then again, I'm still very new to this Automated Build process.

Keep in mind that most of the time, these pet projects are only handled by one man, not a team.

So should I look more into this concept for the sake of using at home, or should I just get some practice on it for experience's sake?

[EDIT]

Although I thank you all for your answers regarding alternatives to CC.Net and such, no one has yet really tackled the issue of whether or not it is feasible to implement a Build System for Home Development.

From stackoverflow
  • I installed CC.NET months ago. It took me a whole night to configure it and create the XML configurations, and I have no regrets: it smoothly integrates with SVN, NUnit, NAnt and MSBuild. You should try it, if only to gain experience.

  • Take a look at Hudson; it's very easy. You just need to deploy it in Tomcat or any other servlet container you use. Once it's up, all configuration can be done in the browser. Hudson supports Maven, Ant, etc. and all the major SCMs. I have been using Hudson for the past year and haven't faced any trouble.

  • As an alternative to CC.Net, I recommend you take a look at TeamCity; it's really easy to set up and get up and running.

  • CC.NET is very feasible; in fact, with the free cost, the wide range of supported actions, and the fact that you can get the source code and modify it to your needs, I could not imagine anything better. I read the other complaints about how difficult it is to set up, but to be honest I had my first simple TFS/VS2005 project up within an hour. Just remember, if you run into any issues or snags, CC.NET has a pretty active Google group for users and devs who would be willing to help you through any gotchas.

  • It is completely feasible to implement a build server for your home projects. I've implemented CC.Net for my home projects myself and it is pretty easy to do so, even for the first time. I would say the learning curve (depending on your experience) is less than a day to get your first project up and building, though there is always the longer tail on that curve as you dig into some of the more interesting details.

    The question to me is more one of the motivation for continuous integration on these projects. If you are using "Home Project" synonymous with "Throw-away Project", there probably isn't much point in going to the trouble of CI unless you are using it specifically as a CI learning exercise.

    However, assuming these are not throw-away projects you are talking about, I've found (in addition to the more obvious benefits of automation) that implementing CI helps reduce the overhead involved in coming back to a project you've walked away from for some period of time. Of course, unit tests are the most valuable asset in this regard, but the combination of unit tests with an automated build/deployment process really allows you to focus on the new and changed requirements when you come back to a project after having set it down for a while.

    Additionally, as mghie points out in the comments to this answer, "CI will give even greater benefit for home projects if they build upon each other, so changes in one project could cause the build to break in others."

    My advice, just do it once so you have a clearer picture of what is involved and the benefits you might reap and drawbacks you might incur. Then make the decision for yourself as to whether or not it is worth continuing to do. Like I said, the learning curve is reasonably low so the investment you will have to make in just giving it a try shouldn't be the reason not to.

    Nutshell: Feasible - Yes, Desirable for home projects - Quite Possibly, Worth further investigation - Definitely, Investment - Relatively low

    mghie : +1, a very nice answer, inasmuch as one can answer the question for the OP. I would probably add that CI will give even greater benefit for home projects if they build upon each other, so changes in one project could cause the build to break in others.
  • I love CC.NET and I'm a big fan of CI, but I have to ask: with only one developer on the project, what integration scenarios exist? Wouldn't you just build the entire project in Visual Studio, negating the need for CI?

    Alex : CI does not just mean compiling. You could also: 1. Delete your source directory and build fresh, which shakes out dependency and linking issues. 2. Run unit tests, static analysis, code metrics. 3. Run automated UI tests on multiple platforms. 4. Run things like Doxygen. 5. Publish or build an installer on demand.
    Robert S. : That's what I would use NAnt for.
    Alex : OK, well that is CI no matter what you use, so I don't see how building it in Visual Studio negates the need for CI
    Robert S. : Because CI is a continuously-running server that polls your source for changes and integrates them with everybody else's. One developer, one machine...why run a server for one guy? Is he going to save his code and wait for the CI process to run?

Is there a function to round a float in C or do I need to write my own?

Is there a function to round a float in C or do I need to write my own?

float conver= 45.592346543;

Would like to round actual value to one decimal place. conver = 45.6

Thanks.

From stackoverflow
  • #include <math.h>
    
    double round(double x);
    float roundf(float x);
    

    Don't forget to link with -lm. See also ceil(), floor() and trunc().

    Evan Teran : that round to the nearest integer...not what he asked for.
    João da Silva : that's right, I got ahead of myself. I up voted Eduard's answer.
  • Sure, you can use roundf(). If you want to round to one decimal, then you could do something like: roundf(10 * x) / 10

    Evan Teran : Nice, everyone else ignored the fact that the asker didn't ask to round to the nearest integer. It should be noted that, because of imprecision in floating point, when printing you will likely see "45.59999" given the example.
    Evan Teran : A nice more general solution would be: double f(double x, int decimal_points) { int n = pow(10, decimal_points); return roundf(n * x) / n; }
    Tommy : unresolved external symbol _roundf referenced in function _f. I have included math.h but it doesn't like roundf(). I am in VS .NET; is roundf only for Linux?
    Eduard - Gabriel Munteanu : Tommy, roundf() is defined in C99, so every compliant compiler should support it. Perhaps you're not linking with the math library?
    Evan Teran : I don't think the newer visual studio's made any effort to support C99.
    Evan Teran : You could just use this though: floor((10 * x) + 0.5) / 10;
  • Just to generalize Rob's answer a little, if you're not doing it on output, you can still use the same interface with sprintf().

    I think there is another way to do it, though. You can try ceil() and floor() to round up and down. A nice trick is to add 0.5, so anything over 0.5 rounds up but anything under it rounds down. ceil() and floor() only work on doubles though.

    EDIT: Also, for floats, you can use truncf() to truncate floats. The same +0.5 trick should work to do accurate rounding.

  • As Rob mentioned, you probably just want to print the float to 1 decimal place. In this case, you can do something like the following:

    #include <stdio.h>
    #include <stdlib.h>
    
    int main()
    {
      float conver = 45.592346543;
      printf("conver is %0.1f\n",conver);
      return 0;
    }
    

    If you want to actually round the stored value, that's a little more complicated. For one, your one-decimal-place representation will rarely have an exact analog in floating-point. If you just want to get as close as possible, something like this might do the trick:

    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>
    
    int main()
    {
      float conver = 45.592346543;
      printf("conver is %0.1f\n",conver);
    
      conver = conver*10.0f;
      conver = (conver > (floor(conver)+0.5f)) ? ceil(conver) : floor(conver);
      conver = conver/10.0f;
    
      //If you're using C99 or better, rather than ANSI C/C89/C90, the following will also work.
      //conver = roundf(conver*10.0f)/10.0f;
    
      printf("conver is now %f\n",conver);
      return 0;
    }
    

    I doubt this second example is what you're looking for, but I included it for completeness. If you do require representing your numbers in this way internally, and not just on output, consider using a fixed-point representation instead.

    Tommy : Rounding down seems to work OK, but for example rounding 45.569346543 gives 45.599998... or 45.5 with *1.0f... I'm closer though; need to read floor and ceil again. Thanks guys.
    Tommy : It is coming from floating point inaccuracy, I changed to double, works great. Thanks.

Prevent an element from "capturing" the mouse using jQuery?

I'm trying to resize an embedded object. The issue is that when the mouse hovers over the object, it takes "control" of the mouse, swallowing up movement events. The result being that you can expand the div containing the object, but when you try to shrink it, if the mouse enters the area of the object the resize halts.

Currently, I hide the object while moving. I'm wondering if there's a way to just prevent the object from capturing the mouse. Perhaps overlaying another element on top of it that prevents mouse events from reaching the embedded object?


using ghosting on the resize doesn't work for embedded objects, btw.


Adding a bounty, as I can't ever seem to get this working. To collect, simply do the following:

Provide a webpage with a PDF embedded in it, centered on the page. The pdf can't take up the entire page; make its width/height 50% the width of the browser window or something.

Use jQuery 1.2.6 to add resize to every side and corner of the pdf.

The pdf MUST NOT CAPTURE THE MOUSE and stop dragging WHEN SHRINKING THE PDF. That means when I click on the edge of the pdf and drag, when the mouse enters the display box of the pdf, the resize operation continues.

This must work in IE 7. Conditional CSS (if gte ie7 or whatever) hacks are fine.


Hmmm... I'm thinking it might be an issue with iframe...

    <div style="text-align:center; padding-top:50px;">
    <div id="doc" style="width:384px;height:512px;">
    <iframe id="docFrame" style="width: 100%; height: 100%;"
 src='http://www.ready.gov/america/_downloads/sampleplan.pdf'>
    </iframe></div></div>
    <div id="data"></div>
    <script type="text/javascript">
        $(document).ready(function() {
        var obj = $('#docFrame');
            $('#doc').resizable({handles:'all', resize: function(e, ui) {
                $('#data').html(ui.size.width + 'x' + ui.size.height);
                obj.attr({width: ui.size.width, height: ui.size.height});
            }});
        });
    </script>

This doesn't work. When your mouse strays into the iframe the resize operation stops.


There are some good answers; if the bounty runs out before I can get around to vetting them all I'll reinstate the bounty (same 150 points).

From stackoverflow
  • Overlay.

    One word answers prohibited, this sentence no verb.

    Will : Sorry, I'm a noob. Code sample is welcome, as googling jquery overlay doesn't do much for me.
    eyelidlessness : You can simply add a hidden overlay element which overlaps the object, is set to be 100% height/width, and toggle it during resize. I will post pseudocode as a separate answer if this is not enough information to go on.
    Will : If you can get it to work, and demonstrate it, take the bounty. I can't get a goddamn overlay to do jack shiznit.
  • I would answer overlay, but will provide actual code :P

    I call it "follower" instead of overlay; it is used in my ThreeSixty plug-in for jQuery (kinda simple source code provided; you will understand by reading what happens to the "follower" div).

    http://www.mathieusavard.info/threesixty

  • Well I was utterly unable to find a XPS Document Viewer example or whatnot, but I was able to come up with this working sample. It doesn't use the overlay idea, but it's a pdf that you can resize...

    edit the thing that made this work without the overlay was the wmode param being set to transparent. I'm not really familiar with the details but it made it play nice on IE7. Also works on Firefox, Chrome, Safari and Opera.

    new edit having serious trouble getting it to work with frames. Some information I've found is not very encouraging. Is it impossible to have it with an <object>? Or an <object> inside the iframe?

    Will : Gimme a couple days to test this; the sample looks good with flash paper, but it might crap on xps/pdf.
    Will : Trying with an iframe with the source set to a doc...
    Paolo Bergantino : Let me know how it goes.
    Will : Updated the question with sample code. I hadn't considered it could be the iframe messing things up...
  • Perhaps SmartLook is an alternative. It's like those lightbox scripts but for embedded content such as pdfs.

    Will : Its interesting, however I can't use it in this project. Thanks for the info, tho.
  • Here is what works for me, though it does hide the iframe while resizing. Is that an issue for you?

    $(document).ready(function() {
        var obj = $('#docFrame');
        $('#doc').resizable(
        { 
            handles: 'all', 
            resize: function(e, ui) {
                $('#data').html(ui.size.width + 'x' + ui.size.height);
                obj.attr({ width: ui.size.width, height: ui.size.height });
            },
            start: function(e, ui) { $('#docFrame').hide(); },
            stop: function(e, ui) { $('#docFrame').show(); }
        });
    });
    
    Will : Undeleted because this might work. Why did you delete it?
    Jab : because he said "Currently, I hide the object while moving. I'm wondering if there's a way to just prevent the object from capturing the mouse." I didn't catch that comment about the current way he is doing it the first time I read it. :)
  • With IE you can call setCapture()/releaseCapture(), it worked great with iframes for me.

    With Firefox -- transparent overlay, as already suggested.

    Will : Gimme some code and I'll try it out.
  • Thanks to StackOverflow's new "Recent Activity" feature, I saw that you asked me to provide a code example. I did only minor testing (can't seem to get your code to inline the PDF in IE, presumably it requires a PDF plugin, which I don't have installed), so this may not work. But it's a start.

    <div style="text-align: center; padding-top: 50px;">
        <div id="doc" style="width: 384px; height: 512px; position: relative;">
            <div id="overlay" style="position: absolute; top: -5px; left: -5px;
                padding: 5px; width: 100%; height: 100%; background: red;
                opacity: 0.5; z-index: 1; display: none;"></div>
            <iframe id="docFrame" style="width: 100%; height: 100%; position: relative; z-index: 0;"
                src='http://www.ready.gov/america/_downloads/sampleplan.pdf'></iframe>
        </div>
    </div>
    <div id="data"></div>
    <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.2.6/jquery.js" type="text/javascript" charset="utf-8"></script>
    <script src="http://ajax.googleapis.com/ajax/libs/jqueryui/1.5.3/jquery-ui.js" type="text/javascript" charset="utf-8"></script>
    <script type="text/javascript">
        $(document).ready(function() {
            var obj = $('#docFrame'), overlay = $('#overlay');
            $('#doc').resizable({
                handles: 'all',
                start: function() {
                    overlay.show();
                },
                resize: function(e, ui) {
                    $('#data').html(ui.size.width + 'x' + ui.size.height);
                    obj.attr({
                        width: ui.size.width,
                        height: ui.size.height
                    });
                },
                stop: function(e, ui) {
                    overlay.hide();
                }
            });
        });
    </script>
    
    Will : I'll test this out. Thanks. Also, you might want to look at the foxit plugin for PDFs. Decent replacement for adobe's without all the crap that goes along with it.

What is the most DRY way to get data out of my database?

I have to write an ASP.NET application that connects to our legacy IBM Universe Database and we are using a product called mv.net which allows us to connect, read, write, select, run server side programs, etc.

I want as little code repetition as possible but I also want as little data transfer as possible as well.

In order to open a connection we must first get a reference to the account using code such as:

mvAccount myAccount = new mvAccount(serverName, login);

Then we can read an item:

mvFile myInvoiceFile = myAccount.FileOpen("INVOICE");
mvItem myInvoiceRecord = myInvoiceFile.Read(invoiceID);

Then we we're done:

myAccount.Logout();

I have a Class for each module, so I may have INVOICE, PURCHASE_ORDER, RMA, REQ, SHIPMENT, and so on. Within INVOICE, I may need to access multiple tables such as CUSTOMER, INVOICE, TERMS, SHIPVIA, etc.

What I planned to do was create a class called TechDB which is the name of our database and put the code in there so in my INVOICE class I can just say:

TechDB connection = new TechDB();
mvItem myInvoiceRecord = connection.Read("INVOICE", invoiceID)

When I do this my TechDB class would open the connection, read the record, and then logout all in one step.

I think I'm heading down the right path but please let me know if not. Here are my problems with this:

  1. How do I return errors my INVOICE class? For example, errors could occur if we are unable to connect to the database, unable to open the file, unable to read the record.

  2. What if I then need to take some data from my INVOICE and then read the TERMS table. I'd hate to have to open a new connection to the database when I just opened one.

  3. Should I call the Dispose method on all classes that have this? For example, the mvAccount has a Dispose method. None of the documentation says to call it, but should I after the Logout()?

  4. Could I create a Dispose method on the TechDB class that does the myAccount.Logout()? That way the connection would remain open and I could close it from my INVOICE class when I was completely done with it?

Give me some opinions on the best way to handle this? My goal is a robust application that is easy to modify and has as little code repetition as possible.

From stackoverflow
  • I would use closures, I think for C# you have Delegates. So something like:

    MyAccount.loginAndDo(serverName, login, delegate(mvAccount account) {
        mvItem invoice = account.Read("INVOICE");
        // ...
    });
    

    In loginAndDo, you would login, call the delegate, and then close the account.

    1. Custom exception classes
    2. One idea would be to construct batch requests, (a list of delegates).

    as per 3/4: In my case, all of my data access objects inherit from a class that holds a static reference to a connection. I'm hesitant to implement disconnect logic in Dispose because of the possibility of a power outage or system crash where the connection isn't released.

  • You may want to take a look at an OleDB / ODBC connection. This is what we are using to connect to Universe.

    See: http://www-01.ibm.com/software/data/u2/middleware/

Which Expression product is for Silverlight developers?

Hi,

I'm not a graphics person, but I want to learn silverlight development.

Which expression product am I looking at here? Blend or studio or is vs.net 2008 all I need?

From stackoverflow
  • Blend is the product I've been using for Silverlight. You can work with just Visual Studio 2008, but Blend can make the vector graphics much easier to create.

  • The studio as a whole is great and you can also do stuff with VS but you are probably looking for Expression Blend.

  • For the developer side of silverlight Visual Studio is all you need. For the designer side Expression Blend is what you will want.

    Basically the Visual Studio designer is read-only, meaning you have to write all the XAML by hand; Blend gives you the ability to drag and drop components and generally makes creating the UI easier (compared to hand-coding XAML).

Inserting a new record pattern in SubSonic 3

I'm trying out the new SubSonic 3 preview, but I'm not sure about the patterns I should be using for the basic CRUD operations in my MVC project.

I'm trying to keep as much of my data logic in my models, so I added some static CRUD methods to each model's partial class.

For example, let's say I have a configuration table, which should only have a single record. So, my partial class may look something like this:

public partial class Configuration{
        // cut down; mocking hooks here IRL
        private static IRepository<Configuration> Table = new MyRepository<Configuration>();

        public static Configuration Retrieve()
        {
            var config = Table.GetAll().FirstOrDefault();
            if (config == null)
            {
                config = new Configuration();
                Table.Add(config);
            }
            return config;
        }

        public static void Update(Configuration target)
        {
            Table.Update(target);
        }
}

Currently, this doesn't work as the config table has an identity column for a primary key, and Add-ing a new record to the table throws the standard "Cannot insert explicit value for identity column" error. SubSonic 3 doesn't seem to generate classes that, upon new-ing them up, play nice with the rules of the database schema (i.e., no default values, no nullable primitives for values that are nullable in the database, etc).

Now, I can alter my table and pattern to get around these issues, but I'm wondering about when I cannot get around this issue--when I have to add a new record to the database and have an identity as my primary key.

I'm also wondering if this pattern is even correct or not. SubSonic gives you a number of different ways you can go about your repository business, so I'm not sure which one I should be using. I'd LIKE to use my models as MUCH as possible (otherwise why not just Linq to Sql?), so I don't want to use SubSonic's query building goodness when trying to CRUD my models.

What should I do here? Any advice on CRUD patterns for using SubSonic 3 in my MVC project would be welcome and +'d. Also welcome are links to websites that cover this subject for SubSonic 3 but that don't rank high in Google searches...


Asked Rob directly (link here). For my DB, at least, there's a showstopper bug in the generated code. Aaah, alpha software.


UPDATE

With the release of Subsonic3, can we have a little bump to reconsider this question?

From stackoverflow
  • First and foremost: the t4 templates are there for you to change as needed with SS3. That was the main idea with using T4 - I don't want to back you into my silliness :).

    To the question at hand - I think this might be a bug in our templates that refuses to stick a value into the PK field:

        ISqlQuery BuildInsertQuery(T item) {
            ITable tbl = _db.FindTable(typeof(T).Name);
            Insert query = null;
            if (tbl != null) {
                var hashed = item.ToDictionary();
                query = new Insert(_db.Provider).Into<T>(tbl);
                foreach (string key in hashed.Keys) {
                    IColumn col = tbl.GetColumn(key);
                    if (!col.IsPrimaryKey) {
                        query.Value(key, hashed[key]);
                    }
                }
            }
            return query;
        }
    

    In this, our check should actually be...

                    if (!col.IsPrimaryKey && !col.AutoIncrement) {
                        query.Value(key, hashed[key]);
                    }
    

    In this way, only the non-identity columns will be inserted. But in reading your issue here, it sounds to me like you're not trying to insert into a non-identity.

    The email you sent me doesn't say anything about PKs as identity - your PK was a thing called "NAme", which is a string type and not an identity (auto-increment).

    I'm wondering about when I cannot get around this issue--when I have to add a new record to the database and have an identity as my primary key.

    This is what SubSonic assumes - that your PK is an IDENTITY column. If you ONLY have an IDENTITY column, we can't help you because this is a deadlocked table: you can't insert any value into it, therefore you can't tick the IDENTITY column. Your only recourse at this point is SET IDENTITY_INSERT ON, which defeats the purpose.

    Hopefully this will answer your question? If I'm not getting it - can you do this for me:

    1. One sentence: what can't you do and what's the error
    2. What did you expect

    Thanks Will and I hope I'm not being thick.

    Will : Right, the example I gave you in the forums was different than this one (this, I believe, predated the forum question). In this example, it was attempting to insert the identity value, which your code fix does answer. But you're not answering the original question!
  • I seem to get the same question.

    When I insert a new record into a table with an auto-incremented PK, the same exception is thrown. As I trace the code, I find the default value of the key of the Entity (generated from T4) is 0. Does that matter?

    Thanks.

  • Ok, so I'm not the only one hitting this. I haven't found a solution mentioned anywhere.

    Rob, the key value in my entities is also set to 0 when initialized... how can I resolve the problem?