RHEL 6.6 - Unable to ssh

Hi,

I have created a new VM with RHEL 6.6, but I can't log in using SecureCRT (or any other SSH client) because it looks like the 'password' authentication method is somehow not enabled.

This is the Trace Options output from SecureCRT:
[LOCAL] : SENT : USERAUTH_REQUEST [none]
[LOCAL] : RECV : SSH_MSG_USERAUTH_BANNER
[LOCAL] : RECV : USERAUTH_FAILURE, continuations [publickey,keyboard-interactive]
[LOCAL] : SEND: Disconnect packet: Unable to authenticate using any of the configured authentication methods.  
[LOCAL] : Changing state from STATE_CONNECTION to STATE_SEND_DISCONNECT
[LOCAL] : RECV: TCP/IP close
[LOCAL] : Changing state from STATE_SEND_DISCONNECT to STATE_CLOSED
[LOCAL] : Connected for 0 seconds, 891 bytes sent, 1233 bytes received

The client has disconnected from the server.  Reason:
Unable to authenticate using any of the configured authentication methods.

===

On the server side, sshd_config is configured with PasswordAuthentication set to yes (whole file attached).

I'm not sure what the problem could be... I checked the PAM config as well and couldn't find anything there (it's the default configuration).

I can reach the server without problems, so the issue is really the authentication method. Maybe there is something else that needs to be done for RHEL 6.6?

PS: I have restarted the sshd service several times... no change.
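
(For reference, the config can be syntax-checked and the effective settings dumped with sshd's test mode; paths assume the stock RHEL 6 openssh-server package:)

/usr/sbin/sshd -t
/usr/sbin/sshd -T | grep -iE 'passwordauthentication|pubkeyauthentication|usepam'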

Tks,
JT
sshd_config.txt
joaotelles asked:

Sudeep Sharma (Technical Designer) commented:
What about SELinux?

Did you configure it, or disable it for testing?

And also check the firewall, just in case.
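
On RHEL 6 a quick way to check both (standard commands on a stock install):

getenforce
service iptables status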

Sudeep

joaotelles (Author) commented:
SELinux is disabled.


 
Oracle 11 RAC Survival Guide

[Deleted some info that could be misused. rindi, EE Topic Advisor]

Used Software:      Oracle Enterprise Linux 5.0
                    Oracle 11g Release 1 (11.1) Clusterware and Database Software

Documentation:      Real Application Clusters Administration and Deployment Guide
                    Real Application Clusters Installation Guide for Linux and UNIX

Content

Overview
Architecture
Enterprise Linux Installation and Setup
Create Accounts
NFS Configuration
Enabling SSH User Equivalency
Install Oracle Clusterware
Install Oracle Database Software
Create Listener Configuration
Create the Cluster Database
Transparent Application Failover (TAF)
Facts Sheet RAC
Troubles during the Installation

Overview

In the past, it was not easy to become familiar with Oracle Real Application Clusters (RAC), because the price of the hardware required for a typical production RAC configuration put this goal out of reach.

Shared storage file systems, or even cluster file systems (e.g. OCFS2) are primarily used in a storage area network where all nodes directly access the storage on the shared file system. This makes it possible for nodes to fail without affecting access to the file system from the other nodes. Shared disk file systems are normally used in a high-availability cluster.

At the heart of Oracle RAC is a shared disk subsystem. All nodes in the cluster must be able to access all of the data, redo log files, control files and parameter files for all nodes in the cluster. The data disks must be globally available to allow all nodes to access the database. Each node has its own redo log and control files but the other nodes must be able to access them in order to recover that node in the event of a system failure.

Architecture

The following RAC Architecture should only be used for test environments.

For our RAC test environment, we use a normal Linux server acting as a shared storage server via NFS. We can use NFS to provide shared storage for a RAC installation. NFS (Network File System) is a platform-independent technology created by Sun Microsystems that allows shared access to files stored on remote computers through an interface called the Virtual File System (VFS), running on top of TCP/IP.
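
As a rough sketch, the export on the storage server and the mount on each RAC node could look like the following (the storage server name nfsrv and the paths are placeholders; the mount options follow Oracle's generic NFS recommendations and should be verified against the RAC documentation for your release):

# /etc/exports on the NFS storage server
/shared/oradata   gentic(rw,sync,no_root_squash) cellar(rw,sync,no_root_squash)

# /etc/fstab entry on each RAC node
nfsrv:/shared/oradata  /u02/oradata  nfs  rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0  0 0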

 

Network Configuration

Each node must have one static IP address for the public network and one static IP address for the private cluster interconnect. The private interconnect should only be used by Oracle. Note that the /etc/hosts settings are the same for both nodes Gentic and Cellar.

Host Gentic

Device    IP Address        Subnet           Gateway          Purpose
eth0      192.168.138.35    255.255.255.0    192.168.138.1    Connects Gentic to the public network
eth1      192.168.137.35    255.255.255.0    -                Connects Gentic to Cellar (private)
/etc/hosts
127.0.0.1               localhost.localdomain           localhost
#
# Public Network - (eth0)
192.168.138.35          gentic
192.168.138.36          cellar

# Private Interconnect - (eth1)
192.168.137.35          gentic-priv
192.168.137.36          cellar-priv

# Public Virtual IP (VIP) addresses for - (eth0)
192.168.138.130         gentic-vip
192.168.138.131         cellar-vip

Host Cellar

Device    IP Address        Subnet           Gateway          Purpose
eth0      192.168.138.36    255.255.255.0    192.168.138.1    Connects Cellar to the public network
eth1      192.168.137.36    255.255.255.0    -                Connects Cellar to Gentic (private)
/etc/hosts
127.0.0.1               localhost.localdomain           localhost
#
# Public Network - (eth0)
192.168.138.35          gentic
192.168.138.36          cellar

# Private Interconnect - (eth1)
192.168.137.35          gentic-priv
192.168.137.36          cellar-priv

# Public Virtual IP (VIP) addresses for - (eth0)
192.168.138.130         gentic-vip
192.168.138.131         cellar-vip

Note that the virtual IP addresses only need to be defined in the /etc/hosts file (or your DNS) for both nodes. The public virtual IP addresses will be configured automatically by Oracle when you run the Oracle Universal Installer, which starts Oracle's Virtual Internet Protocol Configuration Assistant (VIPCA). All virtual IP addresses are activated when the srvctl start nodeapps -n <node_name> command is run. This is the host name/IP address that will be configured in the clients' tnsnames.ora file.
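
For example, with the node names used above:

srvctl start nodeapps -n gentic
srvctl start nodeapps -n cellar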

About IP Addresses

Virtual IP address      A public internet protocol (IP) address for each node, to be used as the Virtual IP address (VIP) for client connections. If a node fails, then Oracle Clusterware fails over the VIP address to an available node. This address should be in the /etc/hosts file on each node. The VIP should not be in use at the time of the installation, because this is an IP address that Oracle Clusterware manages.
When automatic failover occurs, two things happen:

The new node re-ARPs the world, indicating a new MAC address for that address. For directly connected clients, this usually causes them to see errors on their connections to the old address.
 
Subsequent packets sent to the VIP go to the new node, which will send error RST packets back to the clients. This results in the clients getting errors immediately.
This means that when the client issues SQL to the node that is now down, or traverses the address list while connecting, rather than waiting on a very long TCP/IP time-out (~10 minutes), the client receives a TCP reset. In the case of SQL, this is ORA-3113. In the case of connect, the next address in tnsnames is used.

Going one step further is making use of Transparent Application Failover (TAF). With TAF successfully configured, it is possible to avoid ORA-3113 errors altogether.
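
A minimal sketch of a client tnsnames.ora entry using the VIPs defined above with basic TAF (the service name RACDB and port 1521 are placeholders; the FAILOVER_MODE parameters depend on your requirements):

RACDB =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = gentic-vip)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = cellar-vip)(PORT = 1521))
      (LOAD_BALANCE = yes)
    )
    (CONNECT_DATA =
      (SERVICE_NAME = RACDB)
      (FAILOVER_MODE = (TYPE = SELECT)(METHOD = BASIC)(RETRIES = 20)(DELAY = 5))
    )
  )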

Public IP address      The hostname must resolve to the public IP address. You can register both the public IP and the VIP address with the DNS. If you do not have a DNS, then you must make sure that both public IP addresses are in each node's /etc/hosts file (on all cluster nodes).
Private IP address      
A private IP address for each node serves as the private interconnect address for internode cluster communication only. The following must be true for each private IP address:

- It must be separate from the public network
- It must be accessible on the same network interface on each node
- It must be connected to a network switch between the nodes for the
   private network; crossover-cable interconnects are not supported

The private interconnect is used for internode communication by both Oracle Clusterware and Oracle RAC. The private IP address must be available in each node's /etc/hosts file.
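
As an illustration, the interconnect interface on Gentic could be defined like this (standard Red Hat ifcfg format, using the address from the table above):

# /etc/sysconfig/network-scripts/ifcfg-eth1 on Gentic
DEVICE=eth1
BOOTPROTO=static
IPADDR=192.168.137.35
NETMASK=255.255.255.0
ONBOOT=yes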

Enterprise Linux Installation and Setup

We use Oracle Enterprise Linux 5.0. A general pictorial guide to the operating system installation can be found here. More specifically, it should be a server installation with a minimum of 2 GB swap, and with the firewall and SELinux disabled. We installed everything for our test environment; Oracle recommends a default server installation.

Disable SELINUX (on both Nodes)

/etc/sysconfig/selinux
# This file controls the state of SELinux on the system.
SELINUX=disabled
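
Note that SELINUX=disabled only takes effect after a reboot; to drop to permissive mode immediately for testing you can run:

setenforce 0
getenforce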

The firewall is disabled on the server side as well, and there is no other firewall between the SSH client and the server:
root> /etc/rc.d/init.d/iptables stop
root> chkconfig iptables off

Tks,
JT
Gerwin Jansen (EE MVE, Topic Advisor) commented:
Try setting PAM to no - is there a way for you to log on? You say you have restarted sshd a few times, so you can log on, right?
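
That would be the UsePAM directive in /etc/ssh/sshd_config (stock path on RHEL 6), followed by a restart:

UsePAM no
root> service sshd restart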

rindi commented:
What hypervisor are you running as host? Are you trying to connect from a PC on your main LAN? If so, have you made sure you are using "Bridged" mode for the VM's NIC? Have you also installed the VMware Tools, or whatever similar options are available for your hypervisor?
joaotelles (Author) commented:
VMware Tools are installed.

I'm using VMware 5.5 (I have other VMs on the same network and they work, so this points to something on the server).

I tried setting PAM to no and it stayed the same... the impression I have is that the changes to sshd_config are not taking effect, not even after a server reboot...

And yes, I can connect using the console of the vSphere web client.

Tks,
JT
joaotelles (Author) commented:
Another weird thing is that public key authentication is disabled (as per the attached sshd_config file).

But it still shows up in the Trace Options log:
[LOCAL] : RECV : USERAUTH_FAILURE, continuations [publickey,keyboard-interactive]

So I don't know why it's there...

Tks,
JT
Gerwin Jansen (EE MVE, Topic Advisor) commented:
Try connecting with the -vvv option:

ssh -vvv host

Then post the log here.
joaotelles (Author) commented:
Sorry for the delayed reply... there was a network issue, and the message from Notepad++ was misleading (firewall issues).

All the troubleshooting tips were useful.