Thursday, January 13, 2011

Small and medium business: user and workstation management with Debian Linux

I'm looking into building a more unified user and workstation management system and I thought it would be a good idea to ask others how they have solved the apparent issues.

I would use LDAP for user management and have the home directories mounted over NFS on the workstations, which is quite straightforward and would also give me "roaming profiles".
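A minimal sketch of the NFS side of that setup (the server name, export path and network here are hypothetical):

```
# /etc/exports on the file server:
/srv/home    192.168.0.0/24(rw,sync,root_squash)

# /etc/fstab on each workstation:
fileserver:/srv/home  /home  nfs  rw,hard,intr  0  0
```

As long as the LDAP accounts use the same UIDs everywhere, any workstation a user logs into then sees the same home directory.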

But what would be best practice for managing the workstations? Management would in this case include updating, installing and removing packages, updating configuration files, dist-upgrades and so on. Preparing new workstations (semi-)automatically might also be of use.

I'm building everything on Debian and I'd like to do it "the Debian way". Please tell me about your experiences, even if they're with other distributions.


  • You might be interested in using puppet or cfengine for managing your configurations.

There's also expect for automating administrative tasks across many workstations.

    From Maxwell
  • I'm managing ~70 servers with cfengine2.

    I would suggest you let cfengine do the whole configuration management.

    You can also have a look at cron-apt for automatic installation of updates.

    From ThorstenS
  • Try to keep as many of the customizations as you can inside a "my-company-customizations" .deb, which you keep on an internal repository referenced from sources.list. Then you can make a policy change in one place (a newer version of the package) and just apt-get update && apt-get upgrade your machines.
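As a sketch, the sources.list side of that (the repository URL and package name are hypothetical):

```
# /etc/apt/sources.list on each workstation:
deb http://apt.internal.example.com/debian ./
```

A policy change is then a matter of uploading a new version of my-company-customizations to the internal repository and running apt-get update && apt-get upgrade on the workstations (or letting cron-apt do it).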

    Config files are a special case - dpkg isn't great at keeping track of them if they are provided by two different packages (see this debian-devel thread). If you can't add an override config file without disturbing the packaged ones (and many Debian-packaged utilities let you drop files into a conf.d directory), then consider cfengine/puppet as Maxwell suggests.

    From crb
  • As far as user management goes, you'd want to look at doing LDAP authentication. NFS is a good solution for home directories. One small problem with NFS is that it becomes a single point of failure; if the NFS server dies, then all your workstations become completely useless. Any process that accesses a file on the NFS server will block until the NFS server returns.

    As far as managing the workstations goes, we use Puppet, which is incredibly useful. It allows you to describe declaratively how you want your workstations to look, and it reconfigures them to make sure they look that way. You can create files, install packages, create users etc, and build these up into higher-level constructs. We have started trialling Puppet for security updates. We're a little wary of automatically upgrading everything, because we don't want to restart important services, but we don't want to do everything by hand either. More experience will show whether this is a suitable approach.

    wazoox : LDAP user management is of course to be combined with krb5 for user/password management.
    Avery Payne : @wazoox, +1 for kerberos authentication. @David Pashley, +1 for a nice, thorough description of a functional setup.
    pjc50 : Why not use NIS? Installing and maintaining it is pretty simple, compared to LDAP and/or krb5.
    David Pashley : It's not as flexible. It can't store as much information. It can't be used for interoperating with other operating systems. It can't be replicated as well as LDAP. LDAP allows you to split your administration. Those are just the few I can think of off the top of my head.
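For a flavour of the declarative style Puppet uses, a minimal manifest might look like this (the package names and file path are illustrative, not taken from any answer above):

```puppet
# Make sure every workstation has the NFS client tools and our ntp config.
package { ['nfs-common', 'ntp']:
  ensure => installed,
}

file { '/etc/ntp.conf':
  ensure  => file,
  source  => 'puppet:///modules/base/ntp.conf',
  require => Package['ntp'],
}
```

Puppet converges each host towards this description on every run, so a hand-edited ntp.conf simply gets put back.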

Remote Desktop TSWEB port

I'm thinking about using the TSWEB addon for IIS as a remote access solution for some users, but I have a question: does it keep everything routed over port 80, does the ActiveX control it relies on open a new connection on the normal Remote Desktop port, or does something else happen?

One of the things I'm hoping to gain from this (not the only thing) is to not have to open the extra firewall ports to the outside from that location.

  • Port 3389 will still be used for RDP when using the TSWEB add-on.

  • If you're using Server 2008 you should look into TS Gateway. It allows connecting into your organisation through port 443 (an SSL tunnel), so it's nice and secure and uses only 'standard' ports. It behaves just like normal RDP. I believe the associated TS Web interface will also use port 443.

    There are a couple of minor drawbacks though, the main one being that clients must have version 6.1 of the RDP client installed on their machine. This means they need one of the following:

    • XP SP3
    • XP SP2 with the RDP 6.1 update installed
    • Vista SP1

    If you can get around this, we've found TS gateway to be excellent!

    Speedimon : I just want to add that, in the case of Windows XP, the ActiveX component of Web Access seems not to work until you delete some registry keys under HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Ext\Settings: {4eb89ff4-7f78-4a0f-8b8d-2bf02e94e4b2} and {7390f3d8-0439-4c05-91e3-cf5cb290c3d0}. You need to fully delete those nodes; then the Web Access ActiveX control starts working.
    From CapBBeard

Copy eml file to Exchange outbox folder to send mail

Hello, I think I remember once reading on the internet that you could copy a .eml file to an outbox folder of Exchange in order to send it.

Now it would be nice to have this behavior in an application I'm developing, but I can't find any related information.

Am I right, or am I starting to imagine weird things?

  • This technote explains it. You can write the .eml files to the Pickup folder.

Better to have hot spare for Data stores or Transaction Logs in Exchange 2003


I have two RAID 10 arrays, one for the mail stores and one for the transaction logs. I have one drive left that can be a hot spare for either the mail stores or the transaction logs array, but not both. Would it be better to have the hot spare on one rather than the other?

  • Losing either will cause Exchange to shut down, but in the long term the database is probably the worse of the two to lose. I'd assign the hot spare to the database array (and would think about looking for a RAID controller that supports a global hot spare... >smile<)

Is there an Emergency Rescue Disc (ERD) that allows for slipstreaming SATA drivers?

I have a computer at work that was baselined with the DISA Gold Disc, to include disabling the built-in admin account and setting its password to something unknown. So, while trying to use the repair console, I can't use it without the password (another Gold Disc setting).

However, my ERD does not have the proper SATA drivers to "see" this drive. And since I know I'm going to need this sort of disc in the future anyway, I thought I'd ask: does anyone know of an ER disc that allows one to slipstream drivers, a la nLite?

Configuring SQL Server 2005 Enterprise Edition

What are the basic settings required for running SQL Server 2005 Enterprise Edition on a local computer? When I open the program group from the Start menu, it shows only SQL Server Configuration Manager, Surface Area Configuration, Error Reporting and a command prompt. There is no shortcut to launch SQL Server itself. How do I get it?

  • Well, I think SQL Server is already running on your machine. What do you mean by "SQL Server launching shortcut"? There is no such standalone thing; SQL Server components run as services. Check that they are running via Start -> Run -> services.msc.

    If you mean some GUI for managing the server, it is called SQL Server Management Studio. It can usually be found on the second disk, if you used CDs for installation. Try reinstalling the server, and don't forget to tick Management Studio during installation (it can be found under some other node, like 'User tools' or something, I don't remember for sure now).

    From Speedimon
  • My recollection of the SQL Server installation is that it's irritatingly easy not to install SQL Server Management Studio. It's an option that isn't ticked by default.

    On my system the shortcut for Management Studio points to "C:\Program Files\Microsoft SQL Server\90\Tools\Binn\VSShell\Common7\IDE\SqlWb.exe" so you can easily see if it's installed.


Windows Vista “Could not open input file”

I get this strange error when I try to run a php file from anywhere in the system:

Could not open input file: drush.php.

...except from its own directory. When I invoke it from its own directory, no problem -- everything works!

Is there a permissions issue here? I looked under the Security tab in the properties for the file, but every user has been given all the permissions that are available.

So I don't get why Windows is not able to open this file from any other directory except where it is located.

  • It could be a "working directory" issue. Have you tried making a shortcut to it on the desktop, then right-click, properties, and change "start in" to the folder where the php file is?

    picardo : yeah, that worked. so what do you think i should do? keep using the shortcut rather than the batch file? -- i dont know if i mentioned that the file is being invoked by drush.bat.
    Adam Brand : You could use the shortcut, or edit the batch file to do "cd c:\yourpath" before it runs that command. You might also want to try the PATH suggestion below to see if that does the trick.
    From Adam Brand
  • Try adding the path of the PHP bin folder to your %PATH% environment variable.

    It's usually C:\Program Files\PHP\bin (or C:\Program Files (x86)\PHP\bin), or C:\PHP\bin.

    picardo : actually i tried that. it didn't work. same error.

VHD File Structure - Repair Corrupted VHD

I have a corrupt VHD that I need to get data off of. It is a Windows 2003 x32 Hyper-V Virtual Machine (NTFS). I have a nearly identical version of that VM without the data on it that works.

Using a hex editor, I tried inserting the old VHD into the working one after a few pages (comparing more or less at random), but I can't seem to get it to work.

It would be ideal to know the VHD file structure, so that I could know where the FAT is, where the VM header is, etc, so I can insert the bytes intelligently.

Anyone have any experience with this?

  • I'm not sure about repairing the actual disk or the details of the VHD container format, but if you haven't tried mounting it outside of the Virtual Server environment proper, that may be worth a try.

    Apparently, WinImage can mount VHD containers:

    Adam Brand : Yeah, I've tried that; no dice. If I'm going to get this back I need to actually modify the bytes...unless there is some VHD repair program out there (I haven't found any).
    damorg : Sorry I can't be of more help. You've probably seen this, but here's a MS TechNet page with links to the general spec document on the VHD format, just in case you haven't: Not sure if that's got enough detail to help. Good luck and please share what you learn.
    Adam Brand : Perfect! That page links to this document: which describes the spec in detail (including bytes).
    From damorg
  • You could try opening it with VirtualBox. Or get a VMware product and use VMware vCenter Converter to convert it. There's a chance one of these products might compensate for the errors.

    Another option is to use partition/hard drive recovery software. Even though the hard drive is virtual, it should respond the same way to recovery software.

    Kara Marfia : VMWare Converter has salvaged a couple of corrupted VHDs around the office over the last year.
    From Joseph
  • @Adam Brand: Were you able to fix the format of your drive? I have a VHD that got corrupted by VirtualBox and need to get it fixed. The drive only has data on it, so I don't need to boot any OS. I've got a hex editor and have been toying around with it, but no luck so far. I got an event log entry saying the disk footer signature is invalid, but it looks fine in hex compared to a previous copy of the same disk. Any thoughts or direction would be awesome!

    Adam Brand : I haven't got around to it; it's on my to-do list for some weekend. I would also read up on the filesystem of the drive itself (NTFS etc.) to get an understanding of the underlying file structure.

What is iGPU frame buffer?

In the BIOS setup utility of ASUS M3N78-EM, there is an option to set iGPU Frame Buffer Size instead of Hybrid SLI (as has been stated in the manual). What might be the reason for this, and what does iGPU Frame Buffer Size relate to?

  • iGPU stands for integrated Graphics Processing Unit. iGPUs share system memory (RAM) with the other parts of your system. The Frame Buffer size setting allows you to state how much memory is available to the integrated GPU (this goes into much more depth, but that is the Reader's Digest abridged version).

    This setting is adjustable because iGPUs started gaining prominence with the advent of nVidia's Hybrid SLI technology, which allows you to use an iGPU and a discrete GPU (dGPU, or video card) together in an SLI configuration to boost performance or save power. They need to let you turn it on or off because this tech is (currently) only available on Windows Vista (and, I would assume, Windows 7), so people running another OS would probably want to change this setting.

    From RascalKing
  • The iGPU stands for Integrated Graphics Processing Unit. That setting controls the amount of memory you give to the integrated graphics on your motherboard. Typically this can be values from 32MB to 512MB depending on the board.

    I believe when Hybrid SLI is enabled which combines your iGPU with an external discrete GPU, the option to control the amount of memory is turned off since this is managed by the board.

Delete command in xm

Does anyone know what happened to the "delete" subcommand in the xm tool for managing virtual machines on CentOS (5.3)? How do you delete a domain without it?

  • You have two options to remove a virtual machine depending on what you're trying to accomplish.

    If you're looking to really delete the machine for good:

    • If the machine is running, stop it with "xm destroy <name>"
    • Delete any disk images, LVM volumes, or iSCSI targets associated with the machine. You can find what the virtual machine was pointing to by looking at the configuration file in /etc/xen/<name>.
    • Finally, you can delete the configuration file /etc/xen/<name>

    This process should be easy to automate with a script.

    If you want to hang onto the virtual machine but do not want it to auto start when the Dom0 machine is booted then all you need to do is remove the symbolic link to the virtual machine's config file from the /etc/xen/auto/ directory.
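The delete-for-good steps above are easy to put into a script. A hedged sketch (it assumes file-backed disk images referenced as file:/path in the domain's config under /etc/xen; LVM volumes or iSCSI targets would need their own cleanup steps):

```shell
#!/bin/sh
# Remove a Xen domU for good: stop it, delete its file-backed disk images,
# then drop the autostart symlink and the config file. A sketch, adjust to taste.
XEN_CONF_DIR=${XEN_CONF_DIR:-/etc/xen}

delete_domu() {
    name=$1
    # Stop the domain if it is still running (ignore errors if it is not).
    xm destroy "$name" 2>/dev/null || true
    if [ -f "$XEN_CONF_DIR/$name" ]; then
        # Delete any file-backed disk images named in the config (file:/path).
        grep -o "file:[^,\"']*" "$XEN_CONF_DIR/$name" | sed 's/^file://' |
        while read -r img; do
            rm -f "$img"
        done
        # Drop the autostart symlink (if any) and the config itself.
        rm -f "$XEN_CONF_DIR/auto/$name" "$XEN_CONF_DIR/$name"
    fi
}
```

Run it on the Dom0 as e.g. delete_domu myvm.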

Drop data from SystemCenterReporting DB for MOM 2005

I have inherited a MOM 2005 environment and it's unmanageable. The OnePoint DB is 50 GB and the SystemCenterReporting DB is around 700 GB (yes, 700!). I am going through all the normal cleanup steps, like changing the period for which the SCDW stores data in the DB. It was configured for 900 days; by changing it 5 days at a time and running grooming, I have got down to 370 days. My goal is eventually to keep 100 days' worth of data. Right now I am tired of decreasing it 5 days at a time and running grooming. I would like to just drop the whole of the data and start from scratch. How do I do it?

I do not want to uninstall and reinstall reporting services.

What are my options ?

  • I had the same problem in MOM 2005 when I first inherited it, and was able to overcome it manually. I don't have a 2005 database to look at now, but there is a stored procedure that performs the groom operation. What I did was script that procedure out; then you can see the tables whose data is being purged. There is a primary table that holds most of the logging information, which you could truncate to start from scratch, but I wouldn't recommend it. Instead, I wrote a looping script that reduced the retention period one day at a time and called the grooming procedure until it got down to the number of days I wanted to retain. I think it took 3 days to complete, but I never had to monitor its execution or intervene to continue the process. This kind of process keeps you from orphaning records, since you are using the Microsoft process for grooming the data, just taking control and automating it backwards to the point where it is manageable.
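A sketch of that looping approach in T-SQL. The procedure names here are placeholders, not MOM's real ones; script out the actual grooming job to find them:

```sql
-- Walk the retention period down one day at a time, grooming as we go.
DECLARE @days int
SET @days = 370          -- where the retention currently stands
WHILE @days > 100        -- the target retention
BEGIN
    SET @days = @days - 1
    EXEC dbo.SetGroomDays @days   -- placeholder: proc that stores the setting
    EXEC dbo.GroomDatabase        -- placeholder: MOM's grooming procedure
END
```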

    KAPes : Did you write that looping script in PL-SQL or was it vbscript/batch file kind of thing?
  • Well, I just followed the tedious process diligently and completed the cleanup.

    From KAPes

Can't display error document: Apache 2

My httpd.conf is:

<VirtualHost *:80>
DocumentRoot /path/to/dir/

ErrorDocument 403 /my403.html
ErrorDocument 404 /my404.html
ErrorDocument 500 /my500.html

<Directory "/path/to/dir/">
AllowOverride None
Options -Indexes
Order deny,allow
Deny from all
<FilesMatch "\.(JPG|jpg|jpeg|gif|png|css)$">
allow from all
</FilesMatch>
</Directory>
</VirtualHost>

Deep in the page hierarchy under /path/to/dir/, the default error page is being displayed instead of my custom one. What's wrong?

The error log shows:

client denied by server configuration: /path/to/dir/my403.html

  • You don't have html files in the list of allowed files, so they are denied by default. This includes your custom error pages, so those don't show up either!

    I would create a separate subdirectory for your error pages, and specifically allow html pages to be served from that subdir.
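One way to sketch that separate-subdirectory approach (the errors/ path is hypothetical):

```
ErrorDocument 403 /errors/my403.html
ErrorDocument 404 /errors/my404.html
ErrorDocument 500 /errors/my500.html

<Directory "/path/to/dir/errors">
    Order deny,allow
    Allow from all
</Directory>
```

Alternatively, just add html to the existing FilesMatch pattern so the error pages themselves are allowed.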

    From Joe

Default rsize and wsize for NFS v3

The /etc/fstab entry used by my nfs client is {server_ip}:/home/{server_user}/{server_path} /home/{client_user}/{client_path}

I wanted to know the rsize and wsize values used by default. I want to try some benchmarks with values smaller and larger than the defaults, so that I can arrive at an optimum value for my read-heavy setup.

  • Don't modify rsize and wsize. Recent NFS servers and clients will work out the best value for you and will probably do a better job than you would as well. :)

  • If you are on a Linux box, take a look at /proc/mounts. You'll find the current value there. If you specify no value in /etc/fstab, that will be the default (or at least, a value that is considered sane by the system).
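As a sketch, the relevant values can be pulled out of /proc/mounts like this (the helper name is mine; the fourth field of /proc/mounts is the comma-separated option string):

```shell
# Print the rsize/wsize options in effect for a given NFS mount point.
nfs_rw_sizes() {
    # $1 = mount point, $2 = mounts file (defaults to /proc/mounts)
    awk -v mp="$1" '$2 == mp {
        n = split($4, opts, ",")
        for (i = 1; i <= n; i++)
            if (opts[i] ~ /^(rsize|wsize)=/) print opts[i]
    }' "${2:-/proc/mounts}"
}
```

nfs_rw_sizes /home/user/mnt then prints lines like rsize=32768. To pin other values for a benchmark run, add e.g. rsize=8192,wsize=8192 to the options column of the fstab entry and remount.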

    From wzzrd
  • I believe the default rsize and wsize are 8192 for both UDP and TCP mounts.

  • These are the options we use most of the time


    From James

Suggestions for a LDAP Gui suitable for a help desk to manage users

I am running a Solaris 10 environment using Sun Directory Server (LDAP) 5.2, and now 6.3, for managing user accounts. So far I have been managing the environment via scripts to add users and groups, but I would like to pass this responsibility over to the help desk. Since they are not LDAP-savvy, I would like to give them something like a web front end to the People and Groups organizational units of the LDAP tree.

Can you suggest a suitable tool that would mask the complexities of LDAP from non-technical users but still enable them to manage the user accounts?

  • We used phpLDAPadmin and JXplorer for "less technical" people to manage OpenLDAP, though YMMV with Sun's LDAP. Typically these were developers, but both tools are pretty easy to learn well enough for managing users.

    douglasj : Thanks for the suggestions, I've looked at those before but they still let too much of the LDAP directory layout leak through to the Help Desk users. Good for our Admins though.
    From jtimberman
  • Maybe GOsa will fit... I haven't tried it yet, but it's on my TODO list. The homepage says:

    GOsa² provides a powerful GPL'ed framework for managing accounts and systems in LDAP databases. Using GOsa² allows system administrators to easily manage users and groups, fat and thin clients, applications, phones and faxes, mail distribution lists and many other parameters. In conjunction with FAI (Fully Automatic Installation), GOsa² allows the highly automated installation of preconfigured systems. GOsa² therefore provides a single, LDAP-based point of administration for large and small environments, thus making the administration of users and systems and all related parameters manageable and easy.

    douglasj : Thanks, I hadn't heard about GOsa before; it looks interesting. I've got it in a Debian VM to test against our LDAP directories. I'll comment on what I find out.
    From Julien

Run script when user logs on or off

I am looking for a way to run a script [1] when a user logs in [2] and when they log out [3] on their [4] Windows [5] machine.


  1. In a perfect world the script would be a PowerShell script, but any script or application would be acceptable.
  2. Log in can mean at machine start too, but user log in would be ideal.
  3. Log out can mean machine shutdown too, but user log out would be ideal.
  4. This machine is not on a domain.
  5. The Windows version is Windows Vista.
  • On a non domain machine you can edit the local machine policy to run scripts at startup and shut down (this may not work in Vista Home edition).

    To access this go to Start -> Run, then enter gpedit.msc.

    In there, expand the User Configuration node, then Windows Settings; there you will find a Scripts option where you can set the location of scripts to be run at logon and logoff.


    From Sam Cogan
  • Windows Vista's Task Scheduler also supports running scripts at those events.

    From KAPes

Domain Controller Adds Unwanted DNS entries

I have a Windows Server 2003 Domain Controller, that I will call FOO. FOO's internal IP was recently changed from to .

This change has caused two extra DNS entries to exist for FOO besides the correct one: and . These subnets don't even exist on my network. When I try to delete them from DNS on every domain controller they just come back.

How do I get rid of these DNS entries so they do not come back?

  • Do you have any VMware products installed on FOO? It looks to me like you might have IPs for VMware virtual NICs being registered in the DNS.


    Sure thing. By default these NICs don't have TCP/IP bound to them. It sounds like you've got TCP/IP bound to two of them. Head into the "Network Settings" on the host OS. In the TCP/IP properties on each interface, go into the "Advanced" settings and uncheck the "Register this connection in DNS" checkbox on the DNS tab.

    If you're running a DNS server on that machine too, alter the "Listen on:" setting on the "Interfaces" tab of the properties of that server in "DNS Management" to include only the IP address you want it to register / listen on for DNS.

    Kyle Brandt : Right on, is there a workaround to keep these interfaces from being registered in DNS?
    Evan Anderson : Rather than comment, I edited my posting. Glad to see my psychic powers paid off again... *smile*
  • Does your DC FOO have only one LAN interface?

Is ._ (dot underscore) from Mac OS X?

I'm cleaning up some files that have been handled by Mac OS X; this is also the destination for FTP transfers. I know about .DS_Store files. However, I'm seeing some ._XXXX files, i.e. files that are prefixed with a dot and an underscore. Are those some sort of Mac OS X backup / transfer files? Where would they come from?

  • Yes, this is an OS X thing, related to the AppleDouble file format. When OS X writes to a non-native file system (i.e. not HFS) that does not support resource forks, it writes extended info, such as Finder information, to a hidden "._" file.
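If you want to sweep these sidecar files up from the non-Mac side, a sketch (the function name is mine; note the caveat elsewhere on this page that the files can still carry resource-fork data you care about):

```shell
# List AppleDouble sidecar files ("._*") under a directory tree.
# Append -delete to the find invocation only once you are sure
# the resource forks they hold are disposable.
list_appledouble() {
    find "$1" -name '._*' -type f
}
```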

    From Sam Cogan
  • Those files are indeed from Mac OS X. I imagine they're a metadata file of some kind, but I don't know what they do. I do know, however, that they can be successfully deleted without any negative repercussions.

    Gordon Davisson : It's not always safe to delete the AppleDouble (._) files -- they contain, among other things, the resource fork and extended attributes of the original file. Apple has deprecated resource forks, but they're still used for some things (e.g. aliases and .webloc files store their target info in the resource fork). Extended attributes, on the other hand, seem to be getting more and more important...
    From Kevin M
  • Apple has a page about this:

    From radius
  • If the files are visible on a Mac, the dot_clean command line tool might help. There is a man page.

Permissions on a webserver for pmwiki

I want to install pmwiki on a webserver but I'm not sure which files need which permission. Can someone point me to a good guide about Owner/Group/Other and Read/Write/Execute permissions?

How to add an email forwarding in Exchange 2003

In our company, some people want all their email forwarded to an external address (Gmail, for example). How can I set this up in Exchange 2003?

  • This is relatively easy to accomplish on Exchange. It can also be turned on in the Outlook client with rules.

    Hondalex : +1 For all the tutorial with pictures.
    From GregD
  • Add a contact card for their Gmail account. Then in AD, under Exchange General, go into Delivery Options and hit Forward To. Choose the alias that represents the Gmail account. You could also check the box that says to deliver messages to both.

  • Very easy to do:

    • Using Active Directory Users and Computers on a machine with the Exchange admin tools installed, create a mail-enabled contact object in the Active Directory and assign it the user's desired forwarding address ("", etc).

    • Add the newly-created contact as an alternative delivery recipient on the "Exchange General" tab of the properties of the user who wants the email forwarding by clicking "Delivery Options", selecting the "Forward To" radio button, clicking "Modify", and choosing the newly-created contact from the AD. Optionally check the "Deliver messages to both forwarding address and mailbox" if you so choose.

CPU (or memory/etc) plugin for Munin on Mac OS X?

I installed munin on Mac OS X; however, there are no CPU or memory or other very basic plugins installed. In /opt/munin/lib/plugins the system has lots of advanced plugins, like apache or bind9, but nothing for the basics. What could be going on?

  • The plugins don't exist because Munin was developed from a Linux standpoint.

    If you need the functionality, I suggest that you review how the corresponding Linux plugins work, then port them to Mac OS X.

    You would want to review the man pages for:

    • vm_stat
    • iostat

    They are different from what you would expect from a Linux system.

    Rory McCann : Oh brilliant! :)
    From dezwart

Is it possible to track Google CSE?

I have many Google Custom Search Engines that I need to track. In other words, I need to know the users' search terms; otherwise it is hard to improve the engines. How can I do it?

  • If you're using Google Analytics, you could enable "Site Search":

    Google Help : How do I set up Site Search for my profile?


    Here is a tutorial: Know What Your Customers Want: Analyze Internal Search Data With Google Analytics

    On Step 6 it's important that you enter this exact formula into the Field A -> Extract A field:


    Screenshot Google Analytics

    hhh : I did it, but I have a Google CSE in an iframe and a standard engine from Google Sites. It does not track the Google CSE in the iframe, but it tracks the standard engine in Google Sites. I have no idea how to track the Google CSE.
    From splattne
  • If the results page of your Google CSE is on your site, you can just analyze your access logs to pull the search terms out. You should see URLs like
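A sketch of that log analysis, assuming the query lands in a q= parameter (the usual case for CSE results pages; the function name is mine):

```shell
# Tally the search terms appearing as q= parameters in an access log,
# most frequent first.
extract_search_terms() {
    grep -o 'q=[^& "]*' "$1" | cut -d= -f2- | sort | uniq -c | sort -rn
}
```

The terms come out URL-encoded (spaces as +, etc.); run them through a URL decoder if that matters.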

    From chaos

PPTPd: 'initial packet length 4930 outside (0 - 220)'

I got home today and was greeted by a pile of emails from logcheck, informing that pptpd was upset. Here is a snippet:

Jun 26 20:02:37 lazarus pptpd[3060]: MGR: initial packet length 4930 outside (0 - 220)
Jun 26 20:02:43 lazarus pptpd[3060]: MGR: initial packet length 4930 outside (0 - 220)
Jun 26 20:03:52 lazarus pptpd[3060]: MGR: initial packet length 4930 outside (0 - 220)
Jun 26 20:04:04 lazarus pptpd[3060]: MGR: initial packet length 22415 outside (0 - 220)

It seems to have been happening about twice a minute for the last couple of hours.

Any clues what it might be?

I've started a tcpdump, and with any luck that will turn something up...

  • It turns out that a bunch of BitTorrent clients in Germany got it in their heads that they might be able to handshake with my pptpd... Why they wanted to do that I can only guess, but that explains the crazy packets.

  • You should reconsider whether you really want to be running PPTP in 2009, as the encryption is known to be fairly weak.

    OpenVPN seems to be the general choice among simple VPN options (or just using SSL with selected services).

    David Wolever : Ah, that's good to know – thanks.
    From LapTop006

WSS 3, two sites on one IIS 6, and SRS

We are running WSS 3 with integrated Reporting Services. The SharePoint site is running fine and so is Reporting Services. We're using NTLM authentication. We added another SharePoint site on the same front-end server, and in IIS we added another IP address so that ports 80 and 443 would function properly. I published the site through the firewalls and it works beautifully; both sites are accessible from the internet. However, Reporting Services broke when we added the second site, once I changed the Alternate Access Mappings to https://. It just says "Error". Once I returned the AAM back to http://servername:portnumber, Reporting Services worked again, but then the second SharePoint site is not accessible from the internet like we want it to be. Any ideas on how to fix this so Reporting Services doesn't break?

  • Sounds like you need to add a host header for the second site and set the IP address to '(All Unassigned)'. That might take care of the second site not working, and it might fix the other issue too.

    From Tim Meers

How to easily view Failed Request Logs?

I had hoped I could just view the XML files in IE, but I get the following error:

"Security settings do not allow the execution of script code within this stylesheet."

I don't think I've done anything special to my setup. What is the intended way to easily view these logs out of the box?

  • Update:

    Adding "about:internet" to the list of trusted sites in Internet Explorer should fix it. (From this forum post.)

    Select "Internet Options" from the IE "Tools" menu. Select the "Advanced" tab, and scroll down to Security and check the box "Allow active content to run in files on My Computer" and click the "Apply" button.

    splattne : See my updated answer.
    From splattne

Adding a BCD store entry for a legacy Win 2003 dual-boot option

What settings do you use to add a legacy Win 2003 entry to the BCD store on a Win 2008 server to enable the dual-boot option? I tried to use EasyBCD, but I think it is a crock; it added a legacy boot.ini option which I don't think was there in the first place. I think it was {bootmgr}. Any help would be appreciated, as I am completely in the dark.

regards Bob.

Linux Gigabit driver that survives resume

Currently, I'm using the r8169 driver (Realtek 8169 gigabit Ethernet). I tried both the chip on the mainboard and an external network card. When I boot my PC, the link comes up with speed = 1000, as expected.

When I resume after suspend-to-disk, the speed drops to 100. The driver doesn't support renegotiation or setting the speed with ethtool. Sometimes I can fix the issue by removing the driver with rmmod r8169 and loading it again. But lately the chip doesn't come up completely: either the speed is 10 or "up" is false.

I'm sick of this issue. Can someone recommend a network driver (and a gigabit network card) that survives suspend/resume?

  • The Intel e1000 in my laptop survives resume, but only ~20 times or so before it starts thinking it's linked at 100/full (and if you plug it into a 100Mb switch, it's dead).

    Anything that can handle PCI hotplug should be OK.

    Aaron Digulla : I got an e1000 based card and this driver seems to be better. Thanks!
    From LapTop006

Hidden Features of IIS (6.0 / 7.0)

In the long tradition of having hidden features, let us have a list of hidden features in IIS 6.0 / 7.0.
Do put one feature per answer, e.g. Integrated Pipeline.
Inspired by the question on IIS 6.0 vs 7.0.

Also See:
Hidden Features of Linux
Hidden Features of PowerShell
Hidden Features of Oracle Database
Hidden Features of Solaris/OpenSolaris
Hidden Features of SQL Server

  • command line > iisreset

    restarts the IIS services

    From Andrija