  • Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 505

Configuration / Security Issues of Websites

Hi,

I am facing an interesting issue. In my enterprise, I am responsible for network security - firewalls, IPS, HIPS, proxy, etc. Suppose a website is built in SharePoint or ASP and has development issues: no input parameter validation, credentials stored in clear text, or permissions not assigned properly for pages that should not be served to anonymous users. This happens mainly because developers look at the functionality of the application and do not carefully consider how to secure it.
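Of the issues listed, clear-text credential storage is the easiest to illustrate. Below is a minimal sketch in Python (the sites in question are SharePoint/ASP, so this is illustrative only, with hypothetical function names): instead of the raw password, the application stores a random salt plus a PBKDF2 hash, and login re-derives the hash for a constant-time comparison.

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Return (salt, digest); store these instead of the clear-text password."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, digest):
    """Re-derive the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("s3cret")
print(verify_password("s3cret", salt, digest))  # True
print(verify_password("wrong", salt, digest))   # False
```

Even if an attacker reads the data store, they recover only salted hashes, not passwords - something no network-level control can retrofit onto an app that stores clear text.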

My problem is who should be responsible for this at the policy level - the application developer or the network security team? Can I prevent these issues with a network IPS or a host IPS running on the web servers?

What are the actual solutions for these types of issues?

I would appreciate your thoughts.
anishpeterAsked:
4 Solutions
 
SandyCommented:
The first step is to get a VAPT report for every site, to understand which vulnerabilities are present. Only then can you work something out.

I suggest trying Metasploit and Nexpose for free VAPT reports.

Give it a try.
 
anishpeterAuthor Commented:
Hello Sandeep,
I understand that is the quick start. What is VAPT? Is it safe to do with Metasploit/Nexpose? Which one is suitable for me, since I can't determine which tests are safe or unsafe?
Please also share a thought on the rest of my case.
 
SandyCommented:
VAPT is Vulnerability Assessment and Penetration Testing!!!

Metasploit with Nexpose is a great tool combination for getting such reports.

 
anishpeterAuthor Commented:
I checked the Rapid7 website, but found that the free edition does not include web application testing.
Please also try to answer the rest of my scenario.
 
SandyCommented:
Install Metasploit, then download Nexpose from the web. You then need to configure the Nexpose plugin within Metasploit.
 
David Johnson, CD, MVPOwnerCommented:
Quoting your question: "Now the problem is who can be responsible for this - the application developer or network security. Can I prevent these issues with a network IPS or host IPS running on the web servers? What is the actual solution for these types of issues?"

The ultimate responsibility is network security's. The app dev is responsible for coding the app properly; before the app can be placed into production, the network security team will test it and either accept or reject it. After it gets sent back a few times, the app dev will get the idea or be replaced.
 
anishpeterAuthor Commented:
Hello ve3ofa,
This means the web developer is not at all responsible for placing credentials in plain-text format in ASP pages? For not setting proper permissions in IIS? Or for parameter validation not being enabled on a photo-upload input on a page, which then accepts scripts?
But the network security team in an organisation is responsible for firewalls, IPS, proxy and network-level access security. How would they be able to find issues in the code/web application?
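The photo-upload case mentioned above is exactly the kind of check a developer has to write; an IPS will not do it for them. A minimal whitelist sketch in Python (the function name and extension list are hypothetical, not from any framework named in this thread): reject anything that is not a plainly named image file.

```python
import os
import re

ALLOWED_EXTENSIONS = {".jpg", ".jpeg", ".png", ".gif"}

def is_safe_upload(filename):
    """Reject path traversal, script files, and oddly named uploads."""
    name = os.path.basename(filename)
    if name != filename or ".." in filename:   # directory components: traversal attempt
        return False
    root, ext = os.path.splitext(name)
    if ext.lower() not in ALLOWED_EXTENSIONS:  # blocks .asp, .aspx, .php, ...
        return False
    # allow only conservative characters in the base name
    return re.fullmatch(r"[A-Za-z0-9._-]+", root) is not None

print(is_safe_upload("photo.png"))         # True
print(is_safe_upload("shell.asp"))         # False
print(is_safe_upload("../../etc/passwd"))  # False
```

A whitelist ("only these extensions") is deliberately chosen over a blacklist, because attackers are better at enumerating dangerous extensions than defenders are.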
 
tigermattCommented:
The actual responsibility will depend on the departments within your organisation, and their specific roles as dictated by senior management in terms of responsibility to the organisation as a whole, and responsible over the specific application(s) in question. There are many different people who must come together here to ensure the applications are secure, and the responsibility can only be apportioned as per the methods used within the organisation to delegate it.

In reality, the security of the application affects the entire business, as the systems are most likely critical to delivering business change or development. This lends itself to being a corporate-level responsibility - that of the Chief Information Officer (CIO). This is likely to be delegated to other people who are more involved with the day-to-day design and operations, but the buck would typically stop at the top.

When the concepts are being formalised of a new or improved system to effect business change, there will typically be architects or analysts who look at the high-level requirements to define the features and benefits that such a new system is expected to bring to the organisation. These people should consider security as one of the requirements of any system when they specify it; there would be massive ramifications to a business which does not consider security as their number one priority, and is then 'hacked' and loses data as a result.

In addition, there may be a team which manages IT or Corporate risk. Financial institutions typically have a risk management unit. The systems are dealing with corporate data which should not be exposed, so the risk unit would be proactive in analysing existing systems and reviewing new system designs to determine the extent to which they are secure, whether they conform to compliance requirements for your industry, etc.

A tender to develop an application should describe all the features required of it. A developer would be expected to have the nous to develop a solution which adheres to industry security practices, but you cannot expect this unless you stipulate up-front and check afterwards that this part of the contract was implemented properly.  The developer should be using industry standard methods, normal cryptographic algorithms and so on where it is applicable; they should certainly not be developing their own security methods, which have not been subject to peer or public review.

Additionally, you might employ an ethical hacker to actually test the system before or after it is put live. These people work for you (they are 'white hat' hackers) and deliberately probe the system to attempt to discover security flaws and potential methods to gain unauthorised access.

But, this is entirely a management requirement. You are not going to be able to detect poor input sanitisation or determine how credentials are stored in a back-end data store with an intrusion detection system (IDS). These are looking for attacks at the network layer; if you have allowed web server traffic to enter a web server and access the website, then as far as an IDS is concerned, that is legitimate traffic. It may apply further filters to inbound web traffic to ensure it is legitimate, but it isn't going to examine proprietary back-end data stores or understand custom database schemas to determine if passwords are stored securely. That's something for a human being to decide. Similarly, how do you intend to have an IDS check whether permissions are set properly for restricted pages? The IDS is not a human, it is a computer which applies algorithms to data flow on a network to weigh up whether a session is legitimate or an intrusion; it doesn't know whether a particular web server page should be open to the public or not.

-Matt
 
anishpeterAuthor Commented:
Hello Matt,
I completely agree with the comments. Nowadays, developers do not pay much attention to adhering to simple security practices; they rather concentrate on the functional part.
The main problem here is one of accountability, not of responsibility. Sometimes the CIO wants to distribute the accountability down to the engineer level, to those who already have responsibility for the system. I understand it is not a good practice, but it is happening.. thoughts?
 
David Johnson, CD, MVPOwnerCommented:
Your Comment:
This means the web developer is not at all responsible for placing credentials in plain-text format in ASP pages? For not setting proper permissions in IIS? Or for parameter validation not being enabled on a photo-upload input on a page, which then accepts scripts?

My Previous Comment:
before this app can be placed into production the network security team will test it and either accept it or reject it.. after it gets sent back a few times the app dev will get the idea or be replaced.

If the app dev does the above, then they were not programming the application as defined in the contract of services, and the app gets sent back to the developer.

Whoever ultimately signs off the application and puts it into production has the ultimate responsibility, but as we well know in business, things roll downhill.

It is the coder's responsibility to use best practices, and the acceptor's responsibility to ensure that these best practices were in fact followed. This always keeps penetration testers (pen testers) busy. A normal testing scenario is to try to break the app.
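"Try to break the app" can be as simple as feeding it a classic injection payload. Here is a small self-contained Python/SQLite sketch (the login functions are hypothetical, not from the ASP sites in the question) showing the kind of flaw an acceptance test should catch, and why parameterised queries fix it:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'secret')")

def login_unsafe(name, password):
    # String concatenation: a classic injection hole.
    query = f"SELECT * FROM users WHERE name = '{name}' AND password = '{password}'"
    return conn.execute(query).fetchone() is not None

def login_safe(name, password):
    # Parameterised query: the driver treats the values as data, not SQL.
    query = "SELECT * FROM users WHERE name = ? AND password = ?"
    return conn.execute(query, (name, password)).fetchone() is not None

payload = "' OR '1'='1"
print(login_unsafe("alice", payload))  # True  - injection bypasses the password check
print(login_safe("alice", payload))    # False - payload is just a wrong password
```

The pen tester's job is to find that the first function exists; the acceptor's job is to reject the release until only the second one does.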
 
tigermattCommented:
I completely agree with ve3ofa. Unfortunately, there are scenarios where it isn't possible to solve (or at least, it's non-trivial to solve) with a technical solution. It is all dependent on how the project is managed in the first place. Poor management will probably lead to poor performance, poor security or poor results. Doesn't really matter what you do later to try to fix it. If the problems are coded into the app, then the only sensible thing is to go back to the developer and have them fixed in the source. You can try to work around a security fault in the interim, but even that's not easy. For example, does closing the port in the firewall work these days? Probably not. Attacks are on the up, especially from within rather than outside the organisation.

The point here is to make sure you have evidence to indicate you have performed due checks of sensitive or mission-critical systems. If it isn't your place to sign them off as functioning as designed, then ensure the person whose place it is has done so, and that you are happy this is legitimate. If your concern is about avoiding blame, then make sure you keep an audit trail to defend yourself in the event that becomes necessary.

Training is important too: training your staff to recognise and avoid social engineering attacks and bogus emails trying to get them to open malware, and to remember the mantra that when an offer is too good to be true, it probably is. Technical people have a much greater degree of caution and an uncanny way of smelling a rat in a lot of cases, whereas most "users" of the systems we create don't always have that same confidence to ignore the email from their "bank", or the attention to detail to spot the minor differences between a legitimate and a bogus site. And they shouldn't be expected to, beyond what's necessary to do their job. Their interests lie elsewhere. Sadly, for a technical person with a lot of experience, it can be very hard - sometimes impossible - to break that mould and think like a person with less technical experience will react to the procedures we put into place.

I once came across a user who was using a site which did not have a trusted SSL certificate. It was an internal site at the company they worked for. They had been told, any time they saw the warning, to say "yes" to the prompts that the site was not trusted and enter their username and password as normal. They translated this as applying to any site with an untrusted certificate and any login prompt they saw. No security breach occurred, but they had inadvertently entered their username and password into several web locations they probably shouldn't have. The moral of the story? Don't train your users into bad habits. Keep the SSL certificate up-to-date. Then you don't have to teach them to bypass a warning screen they should be scared of; when they do see it, it isn't normal, it doesn't trigger a reflex to just hit continue, and they back out.
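On the certificate point: the right fix is to serve a trusted certificate so verification can stay switched on everywhere, instead of teaching users to click through. As a sketch of what "verification on" means, Python's standard library default context refuses untrusted, expired, or mismatched certificates outright (the hostname below is a placeholder; the function is illustrative, not a product of this thread):

```python
import socket
import ssl

def check_certificate(host, port=443):
    """Connect with full certificate verification enabled.

    Raises ssl.SSLCertVerificationError if the chain is untrusted,
    expired, or the hostname does not match."""
    context = ssl.create_default_context()  # CERT_REQUIRED + hostname check
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()["notAfter"]  # expiry of the leaf cert

# Usage (requires network access):
# print(check_certificate("example.com"))
```

The design point is that the secure behaviour is the default; a user (or script) should never have to opt out of it for day-to-day work.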
 
Mark WillsTopic AdvisorCommented:
Who is responsible at the policy level?

That depends on your policies: if they clearly state "security", then your development team has breached policy.

So, they are most certainly "accountable".

But at the end of the day it is network security who is responsible; and if the development team didn't know, or it wasn't clearly stated in policy, then they can't really be held accountable.

I would say there has been a breakdown in the design document in the first place - whoever approved that did not consider all aspects of design.

So, your "applications architect" has failed to ensure all aspects of policy were being adhered to and deservedly should accept "blame".

So, there you have it - someone accountable, someone responsible and someone to blame.

But, best of all you have the ideal situation to rise above and point out a loophole in security and use this as a test case to upgrade "policy".
 
anishpeterAuthor Commented:
All the concerns addressed.
Question has a verified solution.
