Wednesday, January 26, 2011

Registering a co.za using private nameservers

Hi Everyone

I'm having some difficulty registering a co.za domain using my own name servers. I'm new to this, so please excuse any newbie mistakes and questions.

I'm using BIND 9.7.1-P2 and have followed all the tutorials I can find. But when I try to register the co.za domain I get the following:


Provided Nameserver information
  Primary Server : ns1.maximadns.co.za @ 41.185.17.58
  Secondary 1    : ns2.maximadns.co.za @ 41.185.17.59

Domain "maximadns.co.za", SOA Ref (), Orig ""
Pre-existing Nameservers for "maximadns.co.za":-

Syntax/Cross-Checking provided info for Nameserver at 6a: ns1.maximadns.co.za @ 41.185.17.58
  IPv4: 41.185.17.58 ==> [WARN: No PTR records!]
  FQDN: ns1.maximadns.co.za ==> [WARN: No A records!]

Syntax/Cross-Checking provided info for Nameserver at 6e: ns2.maximadns.co.za @ 41.185.17.59
  IPv4: 41.185.17.59 ==> [WARN: No PTR records!]
  FQDN: ns2.maximadns.co.za ==> [WARN: No A records!]

! The message "No PTR records?" indicates that the reverse domain
! information has not been configured correctly.
!
! The message "No A records?" means that the name of the Nameserver
! specified can not be resolved.
! This can be ignored if the specified Nameserver is a child of the
! domain application.

Adding application
Checking quoted Nameservers....

NS1-1 FQDN: ns1.maximadns.co.za.
NS1-1 IPV4: 41.185.17.58
NS1-1 ORIGIN: ns1.maximadns.co.za.
NS1-1 E-MAIL: hostmaster@maximasoftware.co.za.
NS1-1 SER-NO: 2010081601
NS1-1 NS RECORD1: ns1.maximadns.co.za.
NS1-1 NS RECORD2: ns2.maximadns.co.za.

NS2-1 FQDN: ns2.maximadns.co.za.
NS2-1 IPV4: 41.185.17.59
NS2-1 ORIGIN: ns1.maximadns.co.za.
NS2-1 E-MAIL: hostmaster@maximasoftware.co.za.
NS2-1 SER-NO: 2010081601
NS2-1 NS RECORD1: ns1.maximadns.co.za.
NS2-1 NS RECORD2: ns2.maximadns.co.za.

ERROR: No valid nameservers found - rejecting request.


I did provide IPv4 glue records when specifying the nameservers for the domain registration. From what I understand, that error means that no A or PTR records for the domain were found on the specified servers. But what confuses me is that when I use dig to check whether my name servers are working, I seem to get the correct response (well, according to the tutorials I've read).

When I do a 'dig @41.185.17.58 maximadns.co.za' I get the following response:


; <<>> DiG 9.3.2 <<>> @41.185.17.58 maximadns.co.za
; (1 server found)
;; global options:  printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 1364
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 2

;; QUESTION SECTION:
;maximadns.co.za.               IN      A

;; ANSWER SECTION:
maximadns.co.za.        21600   IN      A       41.185.17.62

;; AUTHORITY SECTION:
maximadns.co.za.        21600   IN      NS      ns1.maximadns.co.za.
maximadns.co.za.        21600   IN      NS      ns2.maximadns.co.za.

;; ADDITIONAL SECTION:
ns1.maximadns.co.za.    21600   IN      A       41.185.17.58
ns2.maximadns.co.za.    21600   IN      A       41.185.17.59

;; Query time: 53 msec
;; SERVER: 41.185.17.58#53(41.185.17.58)
;; WHEN: Wed Aug 18 10:08:23 2010
;; MSG SIZE  rcvd: 117


And when I do a 'dig @41.185.17.58 -x 41.185.17.58' I get the following response:


; <<>> DiG 9.3.2 <<>> @41.185.17.58 -x 41.185.17.58
; (1 server found)
;; global options:  printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 1660
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 2

;; QUESTION SECTION:
;58.17.185.41.in-addr.arpa.     IN      PTR

;; ANSWER SECTION:
58.17.185.41.in-addr.arpa. 21600 IN     PTR     ns1.maximadns.co.za.

;; AUTHORITY SECTION:
17.185.41.in-addr.arpa. 21600   IN      NS      ns1.maximadns.co.za.
17.185.41.in-addr.arpa. 21600   IN      NS      ns2.maximadns.co.za.

;; ADDITIONAL SECTION:
ns1.maximadns.co.za.    21600   IN      A       41.185.17.58
ns2.maximadns.co.za.    21600   IN      A       41.185.17.59

;; Query time: 41 msec
;; SERVER: 41.185.17.58#53(41.185.17.58)
;; WHEN: Wed Aug 18 10:09:54 2010
;; MSG SIZE  rcvd: 140


Just for the record, I'm not doing these dig queries on the server itself; I'm doing them from my personal PC, which is not on the same LAN as the server, so they are being performed over the Internet. I'm aware that I'm explicitly pointing dig at my server for these queries, but unless I misunderstand, when I specify glue addresses while registering the domain, the registry will use exactly those IP addresses as the name servers.

This is the point I'm stuck at: when I try to register the domain it says my name servers aren't valid, but when I test my name servers they appear to be working. Either I'm testing incorrectly, or I have misunderstood some or all of the concepts of DNS.

Any help/advice/pointers you can afford to offer would be greatly appreciated.

Thanks in advance.

  • The issue is with your PTR records: they do not exist for 41.185.17.58 and 41.185.17.59.

    host 41.185.17.58
    Host 58.17.185.41.in-addr.arpa. not found: 3(NXDOMAIN)
    

    From what I can see, that block belongs to Web Africa; you need to get them to delegate your part of the reverse zone to you, or to create the PTR records for you.

    17.185.41.in-addr.arpa. 522 IN  SOA smtp1.wadns.net. noc.webafrica.co.za. 2008120678 14400 600 86400 14400
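
    As a sketch (illustrative records, not something already in place), the entries Web Africa would need to add to that reverse zone look like this:

    ; in the 17.185.41.in-addr.arpa zone
    58      IN      PTR     ns1.maximadns.co.za.
    59      IN      PTR     ns2.maximadns.co.za.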
    
    Brendon : Thanks! That makes sense, since that server is being rented from them. I will submit a ticket with them.
    From topdog

Anonymous access in IIS is prompting for credentials

I'm trying to set up anonymous access for my LAN on IIS on Windows XP.

The problem is that when I navigate to the site via a web browser, it asks for the username and password.

Here are the settings in IIS > Website Properties > Directory Security:

[X] Anonymous access  
Username: IUSR_computername 
Password: ********** 
[X] Allow IIS to control password  

[ ] Basic authentication  
[X] Integrated Windows authentication

Note: the computer's name was changed, so IUSR_computername is actually an old name for the computer. However, it's the same name as the account I see in Computer Management > System Tools > Local Users and Groups.


I tried changing the password in Computer Management for IUSR_computername, then in IIS unchecking "Allow IIS to control password" and entering in the same exact password, but that didn't help.

Update: I'm trying to set up a virtual directory which is hosted in the My Documents folder. From what I understand, this isn't working because the IUSR account doesn't have access to the folder. I confirmed it by trying a folder under C:\, which worked fine.

So I guess my question is how I can keep my folder in the My Documents folder but not give too many permissions to the IUSR account. For example, I don't want to add the user to the Users group in Windows, since that would probably give the user too many privileges (e.g. even on other sites). Also, I don't want to use my own username/pw (instead of IUSR), since that would give this anonymous site a user with too many privileges (my account is an administrator on this machine).

Ideally I would want to use a low-privilege user (e.g. IUSR), but selectively give it access to only this one folder in My Documents. Is that possible?

  • In order for the user account used by IIS for anonymous access to actually access a folder, it needs NTFS-level permissions on that folder; so, if you want to publish some folder in IIS anonymously, you'll need to give that account at least read permissions on that specific folder.

    From Massimo
  • You must make sure that the IUSR account has permissions to view that folder:

    • Either place the file in a folder where the user has permissions (e.g. C:\ instead of My Documents)
    • Or, give read privileges to that account via the Security tab in Windows. (Windows XP users not on a domain will need to enable this tab.)
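
    A minimal command-line sketch of that second option, assuming the account is still named IUSR_OLDNAME and using an illustrative path (cacls ships with XP; /E edits the ACL in place, /G grants the permission):

    cacls "C:\Documents and Settings\you\My Documents\MySite" /E /G IUSR_OLDNAME:R
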
    From Senseful

Looking for an addressbook (web)application that reads/writes LDAP.

In an existing infrastructure based on Debian/Cyrus/Exim/Horde/LDAP, I'm trying to make all the clients (OS X and Windows 7, preferably also iPhone and BlackBerry) work with a centralized LDAP addressbook database. Right now Horde is used for webmail. For email this works fine, but I'm struggling to make a central addressbook that's readable for all clients.

I can't make Horde use the LDAP addressbook that Thunderbird uses (authentication works fine, though). The app we use to write to the LDAP database is homegrown, and we need advanced features like mailing lists. Non-geeks need to be able to use this, so I can't use barebones LDAP editors like LDAPBrowser and phpLDAPadmin.

I have two questions:

  • Does anybody have a suggestion for an addressbook application that can read and write to an LDAP addressbook so that Thunderbird and Apple Mail users can use it?
  • Would it be better to rework the complete email infra-structure and use something like Zimbra?

I strongly prefer an open source solution.

  • Really, don't think about getting this going with a homegrown solution. You will experience so many problems and incompatibilities that you will go mad over it and still have no working solution.

    As an example: Both Thunderbird and Apple Mail can read LDAP address books, but neither can write to them, and IIRC there are minor differences in the required schema that make interaction difficult.

    Zimbra, OpenExchange, Zarafa etc., as full-grown groupware suites, make all this considerably easier or possible at all, with specialized connectors for all kinds of applications. As a matter of fact, they did all the work necessary to get all the components to play together for you; Zimbra, for instance, is heavily based on open source components.

    From SvenW

Using smartctl to get vendor-specific attributes from an SSD drive behind a Smart Array P410 controller

Hi!

Recently I deployed some HP servers with SSDs behind a Smart Array P410 controller. While not officially supported by HP, the servers work well so far.

Now I would like to get wear level info, error statistics, etc. from the drives. While the SA P410 supports a pass-through of SMART commands to a single drive in the array, I was not able to get the interesting values out of the drive.

In this case the wear level indicator (attribute ID 233) is of particular interest to me, but it is only present if the drive is directly attached to a SATA controller.

smartctl on a directly connected SSD:

# smartctl -A /dev/sda
smartctl version 5.38 [x86_64-unknown-linux-gnu] Copyright (C) 2002-8
Bruce Allen
Home page is http://smartmontools.sourceforge.net/

    === START OF READ SMART DATA SECTION ===
SMART Attributes Data Structure revision number: 5
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE     
UPDATED  WHEN_FAILED RAW_VALUE
  3 Spin_Up_Time            0x0000   100   000   000    Old_age  Offline  In_the_past 0
  4 Start_Stop_Count        0x0000   100   000   000    Old_age  Offline  In_the_past 0
  5 Reallocated_Sector_Ct   0x0002   100   100   000    Old_age  Always       -       0
  9 Power_On_Hours          0x0002   100   100   000    Old_age  Always       -       8561
 12 Power_Cycle_Count       0x0002   100   100   000    Old_age  Always       -       55
192 Power-Off_Retract_Count 0x0002   100   100   000    Old_age  Always       -       29
232 Unknown_Attribute       0x0003   100   100   010    Pre-fail Always       -       0
233 Unknown_Attribute       0x0002   088   088   000    Old_age  Always       -       0
225 Load_Cycle_Count        0x0000   198   198   000    Old_age  Offline      -      508509
226 Load-in_Time            0x0002   255   000   000    Old_age  Always   In_the_past 0
227 Torq-amp_Count          0x0002   000   000   000    Old_age  Always   FAILING_NOW 0
228 Power-off_Retract_Count 0x0002   000   000   000    Old_age  Always   FAILING_NOW 0

smartctl on a P410-connected SSD:

# ./smartctl -A -d cciss,0 /dev/cciss/c1d0
smartctl 5.39.1 2010-01-28 r3054 [x86_64-unknown-linux-gnu] (local build)
Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

(Right, it is completely empty.)

smartctl on a P410-connected HDD:

# ./smartctl -A -d cciss,0 /dev/cciss/c0d0
smartctl 5.39.1 2010-01-28 r3054 [x86_64-unknown-linux-gnu] (local build)
Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

Current Drive Temperature:     27 C
Drive Trip Temperature:        68 C
Vendor (Seagate) cache information
  Blocks sent to initiator = 1871654030
  Blocks received from initiator = 1360012929
  Blocks read from cache and sent to initiator = 2178203797
  Number of read and write commands whose size <= segment size = 46052239
  Number of read and write commands whose size > segment size = 0
Vendor (Seagate/Hitachi) factory information
  number of hours powered up = 3363.25
  number of minutes until next internal SMART test = 12

Am I hunting a bug here, or is this a limitation of the P410 SMART command pass-through?

  • I'm sorry that I can't offer any solution, but I can verify that I get similar results on the Smart Array P400.

    I'd be very interested to know what, if anything, you figure out. What SSDs are you using and what configuration?

    Lairsdragon : We were using Intel X25-M G1 160 GB SSDs in a RAID 1+0 stripeset. Recently we replaced the Intel X25-M G1 with the Intel X25-M G2, since the G1 series produced timeouts on the controller under heavy write load.
    From John

Find out why Linux chat script for ppp connection fails

Hi everybody, I'm having trouble making a PPP connection over a GSM modem. The platform is an ARM-based embedded device running Debian Linux 5. The scripts worked before with this device, but not with the new shipment. I just can't get enough information out of chat (/usr/sbin/chat).

The connection is started out of a C Program and the call looks something like this:

/usr/sbin/pppd ttyS1 connect "/usr/sbin/chat -S -s -v -T PIN-Nr -f /etc/chatscripts/chat_gprs_con"

I have tracked the problem down to chat, which handles the communication with the modem hardware.

/usr/sbin/chat -e -v -T PIN-NR -f /etc/chatscripts/chat_gprs_con

chat_gprs_con looks like this:

TIMEOUT         10
ECHO            ON
ABORT           '\nBUSY\r'
ABORT           '\nERROR\r'
ABORT           '\nNO ANSWER\r'
ABORT           '\nNO CARRIER\r'
ABORT           '\nNO DIALTONE\r'
ABORT           '\RINGRING\r\n\r\nRINGRING\r'
""      AT
'OK-\d+++\d\d\c-OK'     ATZ
TIMEOUT         3
OK      AT+CSQ
OK      ATE1
OK      AT+CPIN?
'CPIN: READY-AT+CPIN="\T"-OK'   'AT+COPS?'
OK              'at+cgdcont=1, "IP", "a1.net"'
OK              ATD*99***1#
TIMEOUT         25
SAY     "\nwaiting for connect...\n"
CONNECT         ""
SAY     "\nConnected."
SAY     "\nIf the following ppp negotiations fail,\n"
SAY     "try restarting the phone.\n"

The only info I get through the verbose output in /var/log/syslog or /var/log/messages is:

Jan  1 00:12:30 evm chat[1405]: timeout set to 10 seconds
Jan  1 00:12:30 evm chat[1405]: abort on (\nBUSY\r)
Jan  1 00:12:30 evm chat[1405]: abort on (\nERROR\r)
Jan  1 00:12:30 evm chat[1405]: abort on (\nNO ANSWER\r)
Jan  1 00:12:30 evm chat[1405]: abort on (\nNO CARRIER\r)
Jan  1 00:12:30 evm chat[1405]: abort on (\nNO DIALTONE\r)
Jan  1 00:12:30 evm chat[1405]: abort on (\RINGRING\r\n\r\nRINGRING\r)
Jan  1 00:12:30 evm chat[1405]: send (AT^M)
Jan  1 00:12:30 evm chat[1405]: expect (OK)
Jan  1 00:12:40 evm chat[1405]: alarm
Jan  1 00:12:40 evm chat[1405]: send (\d+++\d\d)
Jan  1 00:12:43 evm chat[1405]: expect (OK)
Jan  1 00:12:53 evm chat[1405]: alarm
Jan  1 00:12:53 evm chat[1405]: Failed

But I can't find out WHY it fails :(

Any ideas and help are very appreciated! Thanks, Ben

  • add debug to your pppd config file
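
    For example, a minimal sketch of what to put in /etc/ppp/options (the logfile path is an assumption):

    debug                       # log LCP/IPCP negotiation details via syslog
    logfile /var/log/ppp.log    # additionally copy pppd's output to this file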

    From topdog
  • It looks like chat is receiving no reply from the modem/serial port.

    Try connecting to the modem using minicom and see what happens when you type stuff in by hand.

    You might also want to compile a copy of serlook for your platform.

    From symcbean
  • To me it looks like the modem does not answer at all. You could check the baud rate and the hardware handshake settings.

    Since we are talking about an embedded platform, you should also make sure the GSM module is powered on, since some platforms allow powering off the module to save power.

File download speed on Windows Server 2003 over long distance

Hi there,

The company I'm working for currently has a server situated in Hong Kong, serving content to mobile phone users OTA via their operators' APNs, mainly for operators from the South East Asia region.

We adopted the OMA OTA provisioning approach, such that we only count a download as successful when we receive a 900 Success from the Install-Notify response; otherwise it counts as a download failure.

However, there are tonnes of errors, including 907 Invalid JAR, 902 Loss of Service, etc. I am talking about an error ratio of 97%. I have gone through the code path and verified that the contents of the JAD, the JAR and the manifest file are valid.

I am starting to suspect that the high number of errors is due to the pathway from our server to the destination being too convoluted.

I tried tracert from the server (Windows Server 2003 R2) to the designated APN and found that it essentially goes through Japan and the US, and didn't manage to reach all hops within the limit of 30.

Is there anything I can do to be more certain that the high number of download failures is due to the geographical distance rather than anything else? Having said that, we have performed end-to-end UAT within Hong Kong, but that doesn't imply things will work the same outside the region.

We are using IIS6 with ASP.NET 2.0; the server is sitting on a network backbone in a data center which I know has a high-speed link to Japan.

Many thanks.

  • Distance is not an excuse. Most of us work over 20,000 km links without problems (except annoying ping). I would really try to move the server closer to the users, or at least try a different datacenter in HK.
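
    As one way to gather that evidence, a hedged sketch using pathping, which ships with Windows Server 2003 (the target host is a placeholder): it probes every hop for a while and reports per-hop packet loss, which distinguishes a lossy path from a merely long one:

    pathping -n gateway.example-apn.net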

    codemonkie : OK, thanks for your suggestion, BarsMonster. Now I need to gather some real evidence to present to the boss to support a datacentre move.

How can website access certificate store on my client machine?

What web technology allows a website to access a certificate store on my client machine?

Do web browsers allow websites to access certificate stores? Can you configure a browser such as IE to restrict a website from accessing certificate stores?

  • Do web browsers allow websites to access certificate stores?

    No, except in support for client-side certificates (and the browser might have an option to always require some user interaction to confirm this). The exception is Internet Explorer, where ActiveX controls have access to everything the user has access to, and this includes their certificate store.

    In IE: Internet Options | Security | <Select Level> | Custom Level... | Don't prompt for client certificate selection when no [...]

    I.e. IE will always prompt if there is not exactly one matching certificate, and optionally always.

    From Richard

iptables port forwarding

I want to forward traffic destined for port 100 to 127.0.0.1:101. The actual goal is to forward to a different IP:PORT, but for the sake of just getting stuff to work I have a socket listening on *:100. From various sites, and googling "iptables port forwarding howto", I've been led to believe the syntax is as follows, which is part of my ruleset.

#Set default policies to DROP
iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -P OUTPUT DROP

#Flush ruleset
iptables -F
iptables -t nat -F
iptables -t filter -F

#Allow local access
iptables -A INPUT -i lo -j ACCEPT
iptables -A OUTPUT -o lo -j ACCEPT

#Allow ESTABLISHED,RELATED
iptables -A INPUT -i eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A OUTPUT -o eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT

#Allow outbound SYN requests
iptables -A OUTPUT -o eth0 -m state --state NEW -j ACCEPT

#### The routing-related rules

# Allow SYN requests for the port-to-be-forwarded
iptables -A INPUT -i $INET_IFACE -p tcp --dport 100 -m state --state NEW -j ACCEPT

# Route to 127.0.0.1:101
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 100 -j DNAT --to-destination 127.0.0.1:101

# Accept the forward
iptables -A FORWARD -t filter -i eth0 -p tcp --dport 101 -j ACCEPT

# Accept all related in forward
iptables -A FORWARD -t filter -i eth0 -p tcp -m state --state ESTABLISHED,RELATED -j ACCEPT

My sysctl settings are:

# sysctl -p
net.ipv4.ip_forward = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.tcp_syncookies = 1
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.default.accept_redirects = 0
net.ipv4.conf.all.secure_redirects = 0
net.ipv4.conf.default.secure_redirects = 0
net.ipv4.icmp_echo_ignore_broadcasts = 1
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.all.log_martians = 1
kernel.randomize_va_space = 1

# cat /proc/sys/net/ipv4/conf/eth0/forwarding
1

nmap states the port is closed for connect() and SYN scans, but open|filtered for FIN and Xmas scans.

What am I missing?

  • Are you trying to redirect traffic originating from your own machine?

    If so, you should also add a DNAT rule to the OUTPUT chain of the nat table. The PREROUTING chain will only process packets coming from other hosts.
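
    A minimal sketch of such a rule, mirroring the address and ports from the question:

    iptables -t nat -A OUTPUT -p tcp --dport 100 -j DNAT --to-destination 127.0.0.1:101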

    From b0fh
  • Port forwarding only really makes sense from one ethernet interface to another. Therefore I would have to presume that $INET_IFACE is not the same as eth0. For the purposes of what follows, I will assume it to be eth1.

    In that case you need the following rules:

    iptables -A INPUT -i eth1 -p tcp --dport 100 -m state --state NEW -j ACCEPT
    iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 100 -j DNAT --to-destination 127.0.0.1:101
    iptables -A FORWARD -t filter -i eth1 -p tcp --dport 101 -j ACCEPT
    iptables -A OUTPUT -t filter -o eth0 -p tcp --dport 101 -j ACCEPT
    

    This opens the inward path. However, since your default policy is to drop packets that have no matching rule, you also need some rules for the return traffic.

    I am not a fan of overcomplicating firewalls built with iptables; you can easily end up in a situation where you have a really hard time understanding what's going on. Therefore I would recommend changing the default policy to ACCEPT for the FORWARD chain, and controlling the traffic primarily through the INPUT chain and, if necessary, the OUTPUT chain.

    If the above doesn't provide enough detail to move forward, just post a comment and I can provide further pointers. There are some good diagrams of traffic flow through iptables; they show which tables and chains are used, and should allow you to formulate your rules. Always remember, you need rules for both directions of traffic flow.

    Thomas : Actually, my incoming interface is eth0 and the one I attempt to forward to is vmnet8. I attempted to adopt your suggested rules, but failed. Therefore I changed the default policies to ACCEPT and added the following rules: iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 100 -j DNAT --to-destination vmip:101; iptables -A FORWARD -j ACCEPT; iptables -A OUTPUT -j ACCEPT. But still no success. A SYN request is received on vmnet8 and a SYN,ACK reply is sent. tcpdump does not see the reply on eth0.
    From wolfgangsz
  • The solution was partly based on wolfgangsz's answer. As I was not originally registered as a user on serverfault, and have since cleared my cookies, it doesn't seem that I can just post a comment.

    Default policies are DROP for INPUT and OUTPUT chains, and ACCEPT for FORWARD.

    function add_forward {
    # $1 = title
    # $2 = internal host
    # $3 = external port
    # $4 = internal port
    if [  "$2" == "" ] || [ "$3" == "" ] || [ "$4" == "" ]; then
      echo Skipping forward $1
    else
      echo "Forwarding port "$3" to "$2" port "$4" ("$1")"
      $IPT -t nat -A PREROUTING -p tcp --dst $MYIP --dport $3 -j DNAT --to-destination $2:$4
      $IPT -t nat -A POSTROUTING -p tcp --dst $2 --dport $4 -j SNAT --to-source $VMNETIP
      $IPT -t nat -A OUTPUT --dst $MYIP -p tcp --dport $3 -j DNAT --to-destination $2:$4
    fi
    }
    

    And finally, to use it:

    add_forward "My forward" "192.168.0.101" 100 101

    $MYIP is defined as the eth0 public IP; $VMNETIP is the IP of the VMware NAT interface.

    So, all in all, this enables incoming connections on eth0:100 to be bridged through the vmnet NAT interface to a virtual machine.

    Hopefully, this can help someone else as well.

    The primary tool for debugging was tcpdump on both the host and guest system

    "tcpdump -i eth0 port 100" for listening on the host. This revealed a problem with me setting an incorrect IP in the POSTROUTING rule which made eth0 just drop the packets.

    Thanks for the help.

    From Thomas

Unable to logoff, disconnect, or reset terminal server user in production environment

I'm looking for some ideas on how to disconnect, log off, or reset a user's session on a 2008 Terminal Server (we are unable to log in as the user either, as the session is completely locked up). This is a production environment, so rebooting the server or doing something system-wide is out of the question for now. Any PowerShell tricks to help us with this?

We've tried to disconnect, log the user off and reset the session, as well as kill the session's processes, directly from the same terminal server (from Task Manager, Terminal Services Manager and the Resource Monitor), with no results.

Help!


UPDATE: We ended up rebooting the server as no other attempt that we could think of worked. I'll leave this question open, hoping someone might have more information about this issue and its potential fixes.

  • You can start a cmd, do a query session, check the ID of the session to be killed and then do a reset session. For instance, if query session shows that the session named rdp-tcp#1 is the one you want to kill, then you can execute reset session rdp-tcp#1 and get it killed.
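
    As a concrete sketch of that sequence (the session name will differ):

    C:\> query session
    C:\> reset session rdp-tcp#1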

    l0c0b0x : Thanks, but that didn't help either.
    From grem
  • Maybe there is a process still running that is blocking the logoff. Check the processes still running for the affected user, then kill them one by one to see which one is causing the problem.

    Also check the HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Run registry key to ensure that only needed processes are started. On 64-bit it is HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Run.

    From quentin
  • I suppose the same happened today on my Win2008R2 Terminal Server. The symptoms were:

    1. The user phoned me with "the 'connecting' message just hangs forever". He's just a simple user, so I can't expect a detailed problem description.
    2. Tried logging off/resetting the session (which usually helps in these cases) - did not work. The session still hangs in the list with 'disconnected' status.
    3. Tried killing all processes for that user - did not help. The session persists and refuses to get killed.

    The solution was: connect as the user (log in with his credentials if you can reset his password, or use some kind of remote assistance to see what happens on his computer) and watch the logon window. When connecting, I clicked the RDP client's 'details' button, and there it was: an error message that winlogon had done something wrong. It was waiting for the user to click one of the 'retry/ignore/etc' buttons, and since it's the omnipotent winlogon, that caused all the weird behavior.

    p.s. I could not find any way to really force-kill a session :(

    From Andys
  • We just had a similar issue with our Windows Server 2008 R2 Remote Desktop server. The user session showed "Active" when looking at RDS Manager, but did not have the associated session ID# or connected device showing (both were blank).

    All of the tricks above did not resolve the issue. When connecting as the user in question, an error message came back stating that the Terminal Server was busy and to try again later or contact the administrator.

    We wound up rebooting the server as well.


startup screen CentOS 5

Where can I modify the bootup screen in CentOS? It runs different messages while booting up and I need to modify them. Thanks.

  • Normally, hiding the boot messages requires recompiling the kernel so that you can start the OS in kernel quiet mode. Lately there are some tools which enable you to do it without recompiling the kernel.

    you can check splashy http://splashy.alioth.debian.org/wiki/installation

    or google "fbsplash"

    From Daniel t.

silent installation command for installing office 2007 in MDT 2010 deployment

Can anybody help me out by providing some silent installation commands for installing Office 2007 for MDT 2010 deployment?

  • Create/edit the config.xml (via Deployment Workbench -> application properties -> Office products tab), specifying "Display level: None" for a silent/unattended installation.
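
    As a sketch, the relevant element inside config.xml looks like this (the Product value depends on your SKU; treat the exact attributes as an assumption to verify against the Office 2007 setup documentation):

    <Configuration Product="ProPlus">
      <Display Level="none" CompletionNotice="no" SuppressModal="yes" AcceptEula="yes" />
    </Configuration>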

Gitosis post-receive hook before initial push

I have a script on the client machine that adds the necessary configuration stuff to gitosis.conf on the server. I'm able to push and everything works correctly.

However, I want to add a post-receive hook so that when the repository is first pushed, some particular action occurs. I tried to add it to the local repository before the first version (in .git/hooks) but the hook wasn't transferred to the server on a push.

How can I do this? There is no repository in the /srv/gitosis/repositories directory until the initial push.

  • Hooks are not pushed to the server via git push. Otherwise this would raise a severe security issue: anyone could push code that gets executed on your server with higher privileges.

    To work around the issue, you may just copy the hook into the created directory, and run it manually after your first push:

    GIT_DIR=. hooks/post-receive
    

    You'll have to do this only once.

    dave paola : I'm trying to do this programatically though. These repositories are created by a script.
    Pavel Shved : @davezor, well, the most obvious way then would be to fix gitosis itself, so it stores per-repository hooks inside the gitosis repository and saves them to the proper folders. I think such functionality (unless it's already implemented) would be useful to the community as well.
  • Git uses templates to set up new repositories. I don't know if gitosis uses these same templates, but it's worth checking out. On my system they exist at /usr/local/share/git-core/templates/hooks.
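
    A hedged sketch of how that could be automated (the template path is from the answer above; verify where your git installation keeps its templates). Any executable hook dropped there is copied into every repository created afterwards:

    sudo cp post-receive /usr/local/share/git-core/templates/hooks/post-receive
    sudo chmod +x /usr/local/share/git-core/templates/hooks/post-receive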

    dave paola : I checked out the gitosis documentation and it's extremely sparse. Is there a way to add a template conditionally?

How to set up a staging apt repository to securely manage upgrades

Hello,

I would like to be able to run automatic apt-get upgrade (once per hour) on our servers (Ubuntu 10.04), so that I don't have to do it manually on all of them (about 15). However, for production machines, that's not a good idea ...

So here's my idea:

Set up a local repository for all 'approved' updates for critical packages. I would then push updated packages from upstream to our local repo after I tested them, and all servers could automatically (apt-cron?) upgrade from this repository.

So my question is this: How do I configure apt on the clients so that they use the local repository only for all packages which exist on the local repository, and the upstream one for all other packages?

Does this actually make sense? Or am I missing something?

Anyways, thanks for your insight!

Andreas.

  • Hi

    I'm not an expert, but I think you can do this with apt pinning.

    If you have a local repo at http://my.local.repo/ called myrepo,

    then the /etc/apt/sources.list on your servers will look like this:

    deb http://my.local.repo/ myrepo main contrib non-free
    
    deb http://ftp.debian.org/debian/ stable main contrib non-free
    

    and /etc/apt/preferences will look like this:

    Package: *
    Pin: release a=myrepo
    Pin-Priority: 700
    
    Package: *
    Pin: release a=stable
    Pin-Priority: 600
    

    Then apt will favour the packages from your local repo.
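
    You can verify which repository wins for a given package with apt-cache policy (bash is just an example package):

    apt-cache policy bash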

    Hope this makes sense and/or works.

    From Ketil

I want to use my own domain email but in Gmail, is it possible?

Possible Duplicate:
Outsource my email server to Google

I want to use my own domain email (me@mydomain.com), but in Gmail. Is it possible?

I just want to use my own email address, but with the Gmail interface.

Just for my email; a single person's email.

  • I want to use my own domain email (me@mydomain.com), but in Gmail. Is it possible?

    Yes.

    From ErikA
  • Yes, if you have a fully qualified mailbox it's possible to do it. Go under "Settings" then "Forwarding and POP/IMAP" to set up your mail.

    ErikA : What exactly is a "fully qualified mailbox"?
    CChock : fully qualified mailbox = a mail account where you can use a normal POP client like Thunderbird to get your mail
    From CChock
  • Google Apps Standard will do exactly what you want.

    Sign up the domain. Set up the proper DNS records. Activate email. You're good to go.
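
    As a sketch, the MX records for Google Apps mail typically end up looking like this in your zone (priorities per Google's setup instructions; verify against their current documentation):

    @   IN  MX  1   ASPMX.L.GOOGLE.COM.
    @   IN  MX  5   ALT1.ASPMX.L.GOOGLE.COM.
    @   IN  MX  5   ALT2.ASPMX.L.GOOGLE.COM.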

    From Jason Berg

Help turning localhost into a web server

Hi, I have an Ubuntu desktop with localhost installed and working fine. What steps do I have to take now to make the IP address of the machine accessible via the web, so people can view the contents of my server just like going to any website and typing in that URL?

    1. Make sure your IP can be resolved through the internet, i.e. get an internet-routable IP address.
    2. Install Apache or lighttpd (your choice here), plus all the other things like PHP or MySQL with it.
    3. Once all those are running you might want to get a hostname for your machine, either to tag it onto your office DNS server or to use one of the many free DNS services out there like DynDNS.
    4. Welcome, new webmaster.

    Failing which, if you find the steps above too short, try looking up "howtoforge"; they have lots of step-by-step guides on how to get things done.
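
    For step 2 on Ubuntu, a minimal sketch (package names as of Ubuntu 10.04; adjust for your release):

    sudo apt-get install apache2
    sudo apt-get install php5 libapache2-mod-php5 mysql-server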

    From CChock

Nginx Config: Front-End Reverse Proxy to Another Port

I have a small web server that serves requests on port 5010 rather than 80.

I would like to use nginx as a front-end proxy to receive requests on port 80 and then have those requests handled by the service on port 5010.

I have installed nginx successfully and it runs smoothly on Ubuntu Karmic.

But, my attempts to reconfigure the default nginx.conf have not been successful.

I tried including a listen argument for port 5010 in the server directive.

I have also tried proxy_pass directive.

Any suggestions on changes that need to be made, or directives that need to be set, in order to get this port forwarding working?

  • I'm assuming that nginx is not the server listening on port 5010 as well as 80, correct? Something else is listening on 5010 and you wish to have nginx proxy to that server?

    If that's the case, here's a nice sample config I've used in the past with success:

    server {
            listen       80;
            server_name  <YOUR_HOSTNAME>;
            location / {
                proxy_pass         http://127.0.0.1:5010/;
                proxy_redirect     off;
    
                proxy_set_header   Host             $host;
                proxy_set_header   X-Real-IP        $remote_addr;
                proxy_set_header   X-Forwarded-For  $proxy_add_x_forwarded_for;
    
                client_max_body_size       10m;
                client_body_buffer_size    128k;
    
                proxy_connect_timeout      90;
                proxy_send_timeout         90;
                proxy_read_timeout         90;
    
                proxy_buffer_size          4k;
                proxy_buffers              4 32k;
                proxy_busy_buffers_size    64k;
                proxy_temp_file_write_size 64k;
            }
    }
    

    I believe that should accomplish what you're seeking. Good luck!

    Ted Karmel : Thanks this was just what I was looking for...
    From vmfarms
  • Pretty minimalist -- I've left the proxy settings at their defaults, though you may want to look into them and adjust to your needs.

    # NGINX configuration
    
    # System configuration ##################
    worker_processes  3;
    events {
        worker_connections  1024;
    }
    user nobody;
    
    # Web configuration #####################
    http {
        server {
            listen 80 default;
            location / {
                proxy_set_header   X-Real-IP        $remote_addr;
                proxy_set_header   X-Forwarded-For  $proxy_add_x_forwarded_for;
                proxy_set_header   Host             $host;
    
                proxy_pass http://127.0.0.1:5010/;
    
            }
        }
    }
    
    Ted Karmel : Tyler - your minimal solution is good. That's what I wanted. Would give you points if I could but still new on serverfault
    From tylerl

Fix things first or virtualize and then fix

Warning: This is a very general question.

I've walked into an environment where everything is working for the most part but it is held together with scotch tape (don't want to offend the duct tape cult).

Key points

  • Backups can't restore to different hardware
  • SAN uses Microsoft iSCSI initiator
  • Permissions are a non-documented nightmare
  • Many single points of failure
  • Servers are mostly 4-5+ years old
  • Poor utilization
  • 20 servers across 3 offices
  • All windows servers

Should I virtualize first (we already have a basic SAN) or address these issues prior to virtualizing? I think virtualizing will make fixing these issues much easier, but I want to avoid garbage in, garbage out. My biggest concern is backups, with Exchange and SQL servers (with middleware) not being able to restore to different hardware. I'm planning to go VMware when the time comes.

Your thoughts.... Thanks,

  • Well, as with nearly all things, it depends. The one big win you get with virtualizing things in your situation is the ability to snapshot VMs. As you're troubleshooting/patching/fixing/etc, the ability to snap could be a godsend. Conversely, the P2V transition could throw another level of instability/unpredictability into the mix. You can always give P2V a try and if things don't work out well, you haven't really lost anything - you can always go back to the physical host.

    Chris S : +1, It depends, but probably virtualize first and work on backup at the same time (though virt may solve that).
    From ErikA
  • Personally I would look into fixing things up first, then look into improvements in the infrastructure. This way you are not introducing new complexities to the problems.

    Let me take a little bit to address the issues you brought up:

    • Backups can't restore to different hardware

    This is a MAJOR issue. You should really talk to your backup vendor and figure out why. Is it because they are doing backups that restore to bare metal and the vendor doesn't support bare metal restores unless it is the same hardware? If so you should be able to add a data only backup to the rotation. That way it may be a bit more work to come back up, but you don't lose the important stuff (the data)!

    • SAN uses Microsoft iSCSI initiator

    Why do you think this is a problem? There is nothing wrong with the Microsoft iSCSI initiator; in fact, I would be wary of someone who didn't use it on MS platforms. We have hundreds of boxes using the iSCSI initiator to talk to dozens of SANs without issue.

    • Permissions are a non-documented nightmare

    This ... sucks. And it happens everywhere. Your best bet is to slowly chip away at documenting these. Search on this site; there are a bunch of questions related to documenting permissions using scripts. But you don't want to go messing around with things before you know how they are right now.

    • Many single points of failure

    This is always a tricky one. You need buy-in from the business to get them to spend the money to reduce or eliminate the SPoFs. My best suggestion to you is to document everything and put together a risk analysis. Then put together a few suggested solutions with approximate costs and present them to the business owners. If they want to reduce or eliminate the SPoFs then you are golden; if not, all you can do is keep documenting them, start documenting the outages they cause, and bring it back up to the owners.

    • Servers are mostly 4-5+ years old Poor utilization

    There is nothing wrong with this as long as they are still under warranty. If the 4-5 year old servers are under-utilized they are good candidates to be virtualized, but you should spend some time doing performance analysis to see where the utilization is (memory, network IO, disk IO, processor, etc.) so you can properly plan your virtualization strategy.

    • 20 servers across 3 offices

    Once again, nothing wrong with this either. You just need to make sure that there are proper remote tools at your disposal: IP KVMs, remote access power strips, iLO/DRAC cards, etc. In fact, depending on the WAN connection, centralizing could reduce performance and manageability. Once again, take a look at your use profiles for the servers.

    • All windows servers

    Absolutely nothing wrong with this, changing things because they are windows for the sake of changing them away from windows is a bad bad idea.

    So, if I was in your situation I would sit down and make a list of everything you see as needing to be changed, then organize them as most important (i.e. Data loss, Downtime) to least important (i.e. inconvenience, infrastructure improvements). Then you just work down the list fixing things one at a time until it's done.

    Virtualization is not a panacea, it may solve some of your problems, but it will introduce new issues and problems along the way. I would think long and hard before jumping in to virtualizing things without a good solid understanding of how things are now as well as how it will change the situation and what new issues it could introduce.

    Zoredache : +1 backup needs to be fixed ASAP, but most other issues tend to be very relative to your network. For a smaller environment, eliminating all SPoF is likely impossible.
    joeqwerty : Personally I don't consider the lack of the bare metal restore to different hardware capability of the backup software to be a deficiency. Sure, many backup programs have this ability but many don't as well. The lack of it doesn't make a particular backup program "bad" in my opinion. Backup software is purpose driven, to perform backups and restores. System imaging\restoration software is purpose driven, for system imaging and restoration. The inclusion of this ability in your backup software is a plus but the lack of it isn't a minus in my book.
    Zypher : @Joeqwerty: I read it as they had no other backups in place to be able to get to the data if they could not get their hands on the same hardware. I wasn't saying the Solution was the problem, more so the fact that from what I was reading they would have no other way to get at their data. So tl;dr i wasn't making a comment as to the software but the lack of - admittedly assumed - ability to get to the data any other way.
    joeqwerty : @Zypher: Gotcha. Thanks for the clarification. Also, +1 for a well thought out and well stated answer.
    From Zypher
  • I would probably fix things while virtualising. Like having 2 simultaneous infrastructures, the "new" one and the "old" one, and migrating things one by one.

    From coredump
  • "If all you have is a hammer, everything looks like a nail."

    Reevaluate why you'd want to virtualize the infrastructure. What problems are you encountering that would warrant virtualization? I personally would not virtualize the infrastructure unless you really, really need the extra hardware that would be freed up for other things. You'd be introducing another issue as well: what if the hardware hosting the hypervisor dies? You might say, "I'll just use HA across two machines with VMs on them!" But what does that buy you over, say, building highly available services simply installed on top of a regular Windows installation?

Definitions for DNS Server Settings?

I am looking at master DNS settings on my Enom domain. I'm wondering what the difference is between the following....

www
@
*

www is a bit obvious, but what about the others? What is @ if it doesn't apply to email? At least, the MX records relate to all of the email, right?

  • The * character is used to denote a wildcard record. A wildcard is basically used to provide an answer to questions about records that don't exist in your zone.

    The www is just a typical name.

    The @ character is a special character that is usually shorthand for the current domain. So for the zone example.org, the @ character is shorthand for example.org.


    From Zoredache
  • Let's say your domain record for nowhere.net has these two entries:

    @ IN A  78.90.12.34
    * IN CNAME nowhere.net.
    

    The @ sign in a record means the domain itself with no host, i.e. nowhere.net. This would allow someone to put http://nowhere.net in their browser and resolve to 78.90.12.34.

    The * sign in a record is a wildcard for any hostname. You could enter www.nowhere.net, ftp.nowhere.net, mail.nowhere.net, yo-mama.nowhere.net, etc. and they would all resolve to the IP specified for the domain. You wouldn't have to set each one up with a separate record.

    From Roman

able to dig a hostname but doesn't resolve via ssh or ping

I am using Snow Leopard and cannot ping or ssh into a host, but I am able to dig it:

dig some.value.host.com

When the IP address comes back in the answer section, I am able to ssh via the IP address (ssh myname@12.45.45.12). Previously (> 1 week ago), this worked fine and I could just ssh in by hostname.

All of this is taking place over VPN. Since I'm on VPN, I'm a little at a loss as to how to figure out what is going on. Any ideas about next steps to take to figure this out?

Answers / Further Clarification:

Are you using split DNS? (my guess is no) - no

Is the DNS server on the other side set to resolve DNS queries for any domain or only its own? - any query

Are you able to reach the DNS server on the other side of the VPN? - yes

Are you tunneling all IP traffic or only specific traffic? - looks like all IP traffic

So, I'm using the Cisco AnyConnect VPN client. When you say DNS tools work at the interface level, would this be why I'm able to dig the west.domain.com host but not ssh to it? I'm guessing I just don't understand how exactly the tunneling works at this level to resolve it.

I agree with most of what you're saying. I'm not sure how to control the 'which traffic to tunnel' issue. It looks like all IP traffic is going through there when connected.

Regarding the /etc/hosts file, this host is not in there.

  • Are you using split DNS? (my guess is no)
    Is the DNS server on the other side set to resolve DNS queries for any domain or only its own?
    Are you able to reach the DNS server on the other side of the VPN?
    Are you tunneling all IP traffic or only specific traffic?

    DNS tools typically use the interface's DNS server instead of querying through the OS (where Cisco's VPN client sinks its teeth). This would cause DNS tools to work but everything else to fail. The best thing to do is setup split DNS. This will specify domains that should be resolved on the other side of the VPN. Any other domains will resolve to whatever you have setup in your interface settings.

    If you can't set that up, set your DNS server to resolve all queries (be careful with this and make sure you want to do it)

    If you can't resolve DNS queries at all on the server on the other side of the VPN, figure out why. Most likely you aren't specifying the correct traffic to tunnel.

    wolfgangsz : Further explanation (in extension of @Jason's): when you are NOT connected to the VPN, then your computer will be configured to use a specific DNS server. That DNS server has probably no knowledge of any DNS entries on the other side of the VPN tunnel. Once you are on the VPN, your computer either needs to use a DNS server that is also on the VPN (and then you need to have routing in place for that) or your local DNS server has be able to forward queries to the DNS server on the other side of the VPN for resolution.
    From Jason Berg
  • Also check your local /etc/hosts file. This usually takes precedence over DNS lookups. When you SSH via the hostname and the hostname exists in the hosts file, it will log in with that. Your dig command queries a DNS server directly, bypassing the hosts file.

    timpone : I've run this several times and restarted the computer. I think this is probably Cisco-VPN specific; perhaps dig is using a different source of information than ssh is? That's my most likely hunch and what was somewhat implied above by the original responder.
    From vmfarms
  • Sounds like you may be caching an outdated DNS record. Try flushing your DNS cache. From terminal, try:

    dscacheutil -flushcache
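
    If the cache flush doesn't change anything, it can also help to see which resolvers the Mac is actually using per interface:

    scutil --dns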

How to attach a virtual hard disk using VBoxManage?

What is the best method for setting the virtual hard drive (VDI) of the primary controller for an existing virtual machine?

Does the syntax change if the VDI is really a child differencing disk of some other parent disk? Do you need to attach the parent VDI and then the child VDI in some way?

Situation:

I have an existing VM. I want to replace the hard drive it uses to boot with either another normal virtual HD or possibly a differencing disk. Can this be done with VBoxManage?

  • I'm not sure if you can do it through VBoxManage; I've always changed it through the GUI after using CloneHD. Your answer may be in the VBoxManage section of the manual.
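
    For what it's worth, more recent VirtualBox versions do expose this through VBoxManage storageattach; a hedged sketch, where the VM name, controller name and path are assumptions to adapt:

    VBoxManage storageattach "MyVM" --storagectl "IDE Controller" \
        --port 0 --device 0 --type hdd --medium /path/to/new-boot.vdi

    For a differencing disk, the child VDI references its parent internally, so attaching the child should be enough.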

    From Mr Shoubs

New server configuration NIC Card

We bought 2 new HP ProLiant ML370 G6 servers. Each server has a 4-port 1Gb NIC and 1 dedicated remote management port. I want to know: what is the benefit of this NIC card with 4 ports? Can I connect these 4 ports to the main switch and get high bandwidth to my server?

And what's the benefit of the dedicated remote port?

What is the best way to RAID 5 four 146GB SAS drives?

  • The 4x 1Gb card is used for several things. Also, that server has 2 on-board ports, for a total of six. The Windows drivers, at least, allow for several things:

    • Port aggregation (if supported by your switch), which allows much higher bandwidth to the server. Though no single TCP connection will exceed 1Gbps, if you have several such streams they won't block each other.
    • VLAN trunking (if supported by your switch).
    • The ability to connect to 4 different subnets.
    • The option of being a dedicated iSCSI HBA.

    The dedicated remote port is for the Integrated Lights Out port. It allows you to perform remote power cycles, as well as the ability to mount DVD media remotely. If you paid for iLO Advanced, it allows you to have a graphical remote console just through the remote port. It's very useful.

    As for the storage, that depends on what you're planning to do with it.

  • First, why would you spend money on servers that you don't understand the capabilities of?!

    Yes, you can use HP's software to aggregate the links. If your switch supports LACP you can get 4Gbps bi-directional. Or you can get link redundancy. Or you can use each port individually. Or a combination of the above.

    The dedicated management port is primarily for environments where the primary NICs are sensitive to management traffic, or where your switch/environment doesn't support VLANs. iLO can be accessed either through the dedicated management port, or it can share its connection with the first on-board NIC using VLAN tagging.

    The best way to RAID 4 drives will depend on what you want from them. RAID10 is faster for some situations; RAID5 will give you more storage space.

    From Chris S