Web Components

Web Components are a set of standards, initially driven by Google engineers as W3C specifications, that allow the creation of reusable widgets or components in web documents and web applications. The intention behind them is to bring component-based software engineering to the World Wide Web. The component model allows for encapsulation and interoperability of individual HTML elements. Support for Web Components is present in Chrome and Opera, and in Firefox behind a manual configuration change; Microsoft's Internet Explorer has not implemented any of the Web Components specifications yet. Web Components consist of four main specifications: Custom Elements, Shadow DOM, HTML Imports, and HTML Templates.

Some websites change their links and then forget to update the old, now-broken links.

For example, the link on this page http://www.xldynamic.com/source/xld.XtraTime.html refers to http://xldynamic.com/cgi-bin/counters/unicounter.pl?name=xld.XtraTime.dl&cache=0&deliver=http://www.xldynamic.com/downloads/xld.XtraTime.zip

which does not work. If I remove part of the link and leave only http://www.xldynamic.com/downloads/xld.XtraTime.zip

it works and the file downloads.

I was wondering if there is an easy way to find a list of all publicly available download links for a website.

I used to work with www.websitename.com/index.html a long time ago, but that does not work anymore.

Any idea how to get the list of download links for a website?
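There is no universal index of a site's downloads, but if the files are linked from pages you can reach, you can crawl the pages and collect every href that points at a downloadable file (tools like wget's spider mode or HTTrack can walk a whole site for you). The extraction step looks roughly like this — stdlib only, and the function name and extension list are my own choices:

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collects the href of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def extract_download_links(html, extensions=(".zip", ".exe", ".pdf")):
    """Return hrefs that look like direct file downloads (a simple heuristic)."""
    collector = LinkCollector()
    collector.feed(html)
    return [h for h in collector.links
            if h.lower().split("?")[0].endswith(extensions)]

page = ('<a href="/downloads/xld.XtraTime.zip">Get it</a>'
        '<a href="/about.html">About</a>')
print(extract_download_links(page))  # ['/downloads/xld.XtraTime.zip']
```

In practice you would fetch each page with urllib.request, feed the body to the parser, and recurse into same-site links; redirect-style counter URLs like the one above would still need the final `deliver=` target extracted separately.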

Do you all know of any cheap web-based inventory programs out there? It would need to be able to:

-Enter orders (part, qty)
-Receive parts (part, qty)
-Stock into a location
-Pick from a location
Hello Experts,
I am using JSTree plugin to display my Data.
But when I select the Child node, I wanted to get the Parent node Text on form submit.
I have tried below code.

$(document).on('click', '#btnSubmit', function () {
    var parent = [];
    var selectedElms = $js('#IndustryTree').jstree("get_selected", true);
    $.each(selectedElms, function () {
        var node = $js('#IndustryTree').jstree(true).get_node(this.parent, true);
        var Parentnode = $js('#IndustryTree').find("[id='" + this.parent + "']");
        parent.push(Parentnode[0].innerText);
    });
});


But when I run this code, Parentnode[0].innerText gives me the parent node's text together with all of its child nodes' text, so it is difficult to isolate the parent node's own text.

Another issue is that when I check the parent node, all of its child nodes get selected, but that selected parent node itself is not returned by the code below:

var selectedElms = $js('#IndustryTree').jstree("get_selected", true);

Parent node selection.

This code works when I specifically select a child node; in that case the parent node's CSS class is jstree-icon jstree-checkbox jstree-undetermined.

Child node selection works.
Any help would be appreciated.

I'm running an ASP.NET page on IIS 8.5 (Windows Server 2012 R2) with IzillaFramework / Cognition CMS. Since this is a legacy system there is no documentation for it and no one in the company is able to assist me, hence I'm posting here.

When a customer visits our public ASP.NET page to change their account details, the page, after they fill in their details, emails them a PDF summary of what they changed (before and after), CC'd to our accounts department.

But since last week the form no longer attaches the PDF generated from the customer's input: the email is still sent, but it is blank, without the attachment.

Here is what happened, chronologically:

At 12:14 PM the first customer received only a blank email with the company header and no attachment, while the accounts department received the same email containing the previous customer's (N-1) data.

At 12:55 PM a second customer visited the same page to update their details and the same thing happened: the customer got a blank email with the company header, and the accounts department received the first customer's data (from 12:14 PM).

So each customer's data appears to be held or cached somewhere and sent out with the next request.

Here is the error message that I can see in error.log:

Friday, 3 March 2017 1:53:35 PM
System.IO.IOException: The process cannot access the 

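Without seeing the code this is only a guess, but that pairing of symptoms (an IOException on a file plus one customer receiving the previous customer's data) often points to a fixed, shared temporary file path being written by concurrent requests. The usual fix is to generate a unique file per request; the app is ASP.NET, so this Python sketch only illustrates the pattern (the function name is mine):

```python
import os
import tempfile

def write_pdf_for_request(pdf_bytes):
    """Write one request's PDF to its own unique temp file and return the path.

    A fixed path (e.g. summary.pdf) lets concurrent requests lock or
    overwrite each other's output -- which would produce exactly an
    IOException plus stale data being attached to the next email.
    """
    fd, path = tempfile.mkstemp(suffix=".pdf")
    with os.fdopen(fd, "wb") as f:
        f.write(pdf_bytes)
    return path

first = write_pdf_for_request(b"customer 1 summary")
second = write_pdf_for_request(b"customer 2 summary")
print(first != second)  # True: each request gets its own file
```

The .NET analogue would be Path.GetTempFileName() or a GUID-based filename instead of a constant path, with the file deleted after the email is sent.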

How can a WCF service return a file (perhaps as a stream?) based on its file type?

The idea is to let the end user call the web method and have the file downloaded to them.

Thank you.
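A WCF operation can return a System.IO.Stream, with the response content type set so the browser treats the body as a download. Whatever the stack, the "based on its file type" step is mapping the file's extension to a MIME type; here is a stdlib sketch of just that step (Python rather than C#, purely to illustrate the mapping — the function name is mine):

```python
import mimetypes

def content_type_for(filename):
    """Guess the MIME type to send alongside a downloaded file."""
    guessed, _encoding = mimetypes.guess_type(filename)
    return guessed or "application/octet-stream"  # safe generic fallback

print(content_type_for("report.pdf"))     # application/pdf
print(content_type_for("file.unknown"))   # application/octet-stream
```

The fallback type application/octet-stream tells the client to treat the bytes as an opaque download when the extension is unrecognized.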
Hi expert,

I need advice on both options below, a quick analysis of the cost and advantages/disadvantages of each, and I'd like to know of any articles or reviews comparing the two services.



Alan lam
I'm looking to create a link to a URL that updates automatically using today's date. The reason is that I want to monitor the pricing on my Airbnb listings without having to type in all the search dates every time. So I'd like a URL that is essentially as below, but with dates that update to, for example, checkin = ** tomorrow ** and checkout = ** two days later **.


Is there an easy way to do this, perhaps even an expression that the browser itself can parse?


PS: I've added the JavaScript topic as I'm interested to learn this, and if this is a viable method then this could be a good test.
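A URL on its own can't compute dates — browsers don't evaluate expressions inside a link — so you need a small script (a bookmarklet, or a page of your own) that builds the link each time. The date arithmetic is the whole trick; here it is sketched in Python (the checkin/checkout parameter names come from your URL, the base URL and function name are my assumptions):

```python
from datetime import date, timedelta

def search_url(base, days_ahead=1, nights=2):
    """Build a search URL with checkin = tomorrow and checkout two days later."""
    checkin = date.today() + timedelta(days=days_ahead)
    checkout = checkin + timedelta(days=nights)
    return "{}?checkin={}&checkout={}".format(
        base, checkin.isoformat(), checkout.isoformat())

# Hypothetical base URL -- substitute your real listing-search URL.
print(search_url("https://www.airbnb.com/s/London/homes"))
```

The same few lines translate directly to a JavaScript bookmarklet using `new Date()` and `location.href`, which would make this a reasonable first JavaScript exercise.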
Hello Experts,
Is there any service that can be embedded in our site to collect remarks from our customers and publish them?
Our customer wants to be able to select what goes onto the site or not, so the widget has to let him filter remarks before publishing.
Is there a service like this we can plug into the site? And what is it called?
I wanted to inquire whether anyone knows how PageViews get tracked with Google Analytics (web log, etc.).
I have a website that has login capability, but based on the PageView counts it appears that all visitors are tracked, not just logged-in users. Thanks for any info.
Good Afternoon,

Is anyone aware of a piece of software that allows you to clone/index an entire website? A potentially useful application for this would be business continuity. We're looking for a way to spider our existing site and then serve it up elsewhere. There are some complexities behind the scenes with our site in regards to database reads, but truth be told, the core content, while it lives in a DB, really could be served up as "static" content. The appeal of such a solution would be ultra-low BC/DR costs.

Is anyone aware of such a program/service?  Again, we are fully aware that you can simply copy a site to another location, but the site in question is not a simple flat site.


Hi Guys,

Issue: we have two URLs (www.abc.com & www.abbc.com). We have created a new website and would like to direct traffic for the older website to the new one, but if a visitor comes back to the older site within 24 hours, it should take them to the older website until we fully launch the new one. Is that possible?

Any help is much appreciated.
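It should be possible if the old site sets a cookie (for example a last-visit timestamp) and its redirect logic consults it: first-time visitors get sent to the new site, while anyone whose cookie shows a visit within the last 24 hours stays on the old one. Here is the decision logic sketched in Python (which of the two domains is the old site isn't stated in the question, so those constants are assumptions):

```python
import time

DAY_SECONDS = 24 * 60 * 60

# Placeholder URLs -- which of www.abc.com / www.abbc.com is the
# old site is not stated, so these assignments are assumptions.
OLD_SITE = "http://www.abc.com"
NEW_SITE = "http://www.abbc.com"

def choose_site(last_visit_ts, now=None):
    """Pick the redirect target for an incoming visitor.

    last_visit_ts: epoch seconds read from a cookie, or None for a
    first-time visitor (or one whose cookie has expired).
    Returning visitors seen within 24h stay on the old site;
    everyone else goes to the new one.
    """
    if now is None:
        now = time.time()
    if last_visit_ts is not None and now - last_visit_ts < DAY_SECONDS:
        return OLD_SITE
    return NEW_SITE

print(choose_site(None))                   # first-time visitor -> new site
print(choose_site(time.time() - 3600))     # came back after 1 hour -> old site
```

In practice you would set the cookie with a 24-hour expiry on each visit to the old site, which lets the browser handle the timeout for you and reduces the check to "cookie present or not".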
I don't understand what is happening when running my Gruntfile.js. I ran it on both macOS and Windows Server 2012 and got the exact same error.

This is my Gruntfile.js:
'use strict';

module.exports = function (grunt) {

  // Time how long tasks take. Can help when optimizing build times
  require('time-grunt')(grunt);

  // Automatically load required Grunt tasks
  require('jit-grunt')(grunt, {
    useminPrepare: 'grunt-usemin'
  });

  // Define the configuration for all the tasks
  grunt.initConfig({
    pkg: grunt.file.readJSON('package.json'),

    // Make sure code styles are up to par and there are no obvious mistakes
    jshint: {
      options: {
        jshintrc: '.jshintrc',
        reporter: require('jshint-stylish')
      },
      all: {
        src: [/* list your JS sources here */]
      }
    },

    useminPrepare: {
      html: 'app/menu.html',
      options: {
        dest: 'dist'
      }
    },

    // Concat
    concat: {
      options: {
        separator: ';'
      },
      // dist configuration is provided by useminPrepare
      dist: {}
    },

    // Uglify
    uglify: {
      // dist configuration is provided by useminPrepare
      dist: {}
    },

    cssmin: {
      dist: {}
    },

    // Filerev
    filerev: {
      options: {
        encoding: 'utf8',
        algorithm: 'md5'
      }
    }
  });
};

I am working with a website provided by the city government which deals with architectural plans and drawings, using a package called ProjectDox. When I go through the various panels and try to upload a "drawing", I'm presented with a Windows Explorer window to select the folder & file I want to upload. The problem is that the allowable file types do not include pdf files which are supposed to be included. The system admins insist that it's something wrong on my machine since no one else is having this problem, but I've tried 3 different machines all with the same result.

So, my question is: when a website opens an upload window on your machine with specific file type restrictions, where does this list of file types come from? I'm having a really hard time believing that it's coming from anywhere other than the web server. Can there be something on my local machine that would override the list provided by the web server?

I'm using IE11 (required by this website) with Silverlight (also required). Have tried on Windows 10 and 8.1.


Harry Zisko
I need to find a combination of Python, Selenium, and browser versions that works on Windows 7. I have tried Python 3.5.2, Selenium 2.53.6, and various versions of Firefox, but none of them will do what I need.

I just need to select all of the page in front of me. It is a CMS page, so I want the rendered content rather than the html and body tags; viewing the source isn't a solution either, because on a CMS page the source omits the content assembled in the window that Selenium navigates to. I have tried XPath, and that is why I think I need to make sure all three versions work well together, because it shows the following error:

(ff2-32) C:\Users\Randal J. Watkins\ff2>python expertsbrazil_clean.py
Traceback (most recent call last):
  File "expertsbrazil_clean.py", line 82, in <module>
    button = driver.wait.until(EC.visibility_of_element_located((By.CLASS_NAME,
  File "C:\Users\RANDAL~1.WAT\Envs\ff2-32\lib\site-packages\selenium\webdriver\s
upport\wait.py", line 80, in until
    raise TimeoutException(message, screen, stacktrace)
selenium.common.exceptions.TimeoutException: Message:
    at FirefoxDriver.prototype.findElementInternal_ (file:///C:/Users/RANDAL~1.W
    at FirefoxDriver.prototype.findElement (file:///C:/Users/RANDAL~1.WAT/AppDat

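The TimeoutException here just means the explicit wait gave up before the element became visible; with that tool combination it is frequently a version mismatch rather than a locator problem. As far as I recall, Selenium 2.53.x's built-in Firefox driver only works with Firefox up to roughly version 47, while Firefox 48+ requires geckodriver and Selenium 3. Independent of versions, WebDriverWait.until is essentially a polling loop; here is a minimal stand-alone sketch of the same mechanism (plain Python, not Selenium itself — the function name is mine):

```python
import time

def wait_until(condition, timeout=10.0, poll=0.5):
    """Poll `condition` until it returns a truthy value or `timeout` expires.

    This mirrors what selenium's WebDriverWait.until does: the
    TimeoutException in the traceback simply means the condition
    (element located and visible) never became true in time.
    """
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError("condition not met within %.1fs" % timeout)
        time.sleep(poll)

# Example: a condition that only becomes true on the third poll.
attempts = {"n": 0}
def ready():
    attempts["n"] += 1
    return attempts["n"] >= 3

print(wait_until(ready, timeout=5, poll=0))  # True
```

If the element genuinely exists, a longer timeout or a different expected condition (presence vs. visibility) often resolves it; if not, pinning Firefox to a version the driver supports is the first thing to try.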

I've been asked by my CEO to research an e-commerce website that is selling in our industry.

He would like to know things like the volume of visitors per month, what language the site is built with, and any other technical information I can get out of the website to gauge how successful the site is.

I really have no idea where to start on this, because obviously I am not the site administrator. This is a competitor's website!

Please suggest anything that might help get some stats from a site.

I'm a desktop support student, currently stumped by this user's issue; any advice would be fantastic:

A user is developing an app with another user. Both are working on it internally, but it is being tested externally in various browsers.
One part of the app sends the contents of a form to a server (triggered by a button click). This works fine in the first user's Firefox but not in the other user's.

Both have the latest version of Firefox and have the same plug-ins enabled/disabled (I think all are disabled).


Is there any terminology used by web developers to cover how you keep content current, i.e. removing old, irrelevant content; adding new, relevant content; updating and maintaining links; and so on?

We have a number of websites developed by third parties and are interested in auditing their web maintenance processes, but I am unsure of the exact wording for this activity, and whether there are any frameworks or best-practice checklists that web development teams work towards to keep their sites up to date, which we could use to see how well they are doing. If such frameworks and checklists do exist, the details would help no end.
Does anyone know of any automated way of auditing a website against the WCAG (Web Content Accessibility Guidelines)? Or would you suggest such a review is more practical using manual analysis?
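Automated checkers (axe-core, WAVE, and pa11y are commonly cited) can only cover the machine-testable subset of WCAG — missing text alternatives, contrast ratios, form-label associations — while criteria like meaningful reading order or sensible link text still need a human, so in practice the two are combined. As an illustration of what the automated side does, here is a minimal stdlib-only sketch of one WCAG 1.1.1-style check (images without an alt attribute); the class name is my own:

```python
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Counts <img> tags that lack an alt attribute (a WCAG 1.1.1-style check)."""
    def __init__(self):
        super().__init__()
        self.violations = 0

    def handle_starttag(self, tag, attrs):
        if tag == "img" and "alt" not in dict(attrs):
            self.violations += 1

checker = MissingAltChecker()
checker.feed('<img src="logo.png"><img src="chart.png" alt="Sales chart">')
print(checker.violations)  # 1
```

A real audit tool runs dozens of such rules against the rendered DOM (not just the static HTML), which is why browser-based checkers catch issues that source-level scans miss.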

I am looking for a plugin that shows a headline saying the website is currently in maintenance mode, so visitors may look around but should expect issues while viewing the site.

Suggestions or ideas are welcome.

Do these two lines

comply with the format below?

Does anyone have any 'real' sample file also showing such a line to the same standard?
I am trying to search for updates on a specific website that does not have a search function. Is there a third-party tool or scanner that can do this for me on a regular basis and then alert me to my phone or email?
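There are hosted change-detection services that will watch a page and alert you by email, and the core mechanism is simple enough to script yourself: fetch the page on a schedule (cron or Task Scheduler), hash the body, and alert when the hash differs from the stored one. A minimal sketch of the comparison step (the fetch and the alert are left out; the function name is mine):

```python
import hashlib

def page_fingerprint(body):
    """Hash a page body so successive fetches can be compared cheaply."""
    return hashlib.sha256(body).hexdigest()

old = page_fingerprint(b"<html>price: 100</html>")
new = page_fingerprint(b"<html>price: 120</html>")
print(old != new)  # True: the page changed, time to send an alert
```

One caveat: pages with rotating ads or timestamps change on every fetch, so in practice you hash only the relevant portion of the page rather than the whole body.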
I'm building a 3D model in SketchUp and considering buying the Pro version, which can output more formats. I would like to send a link to about ten people so that they can look around the model. What are the best ways to do this? I'd prefer NOT to have to install plugins. If there's no way around it, I would like to create 360° images (so that they can look around at predefined points in the 3D model): one 360° image per predefined point, preferably high resolution so they are able to zoom in digitally.
Any ideas?
How do you get the recent logs from WebSphere via the CLI?

I am trying to create images for a website that will allow the user to drag images from one column to a Drawing Board to build their Workstation, etc.  

1) What is the best way to find images for various components such as Monitors, etc?
2) Is there a way to build layers on an image that have a Name that can be used in code for the "parts" that make up a Monitor such as USB Port?

Thanks for any tips!
I'm working on a tool for our CAD department to manage requests. It's a ticketing system much like a typical helpdesk application. As this system grows, the requests for it to do more and more things are also growing, and some of them are difficult to pull off as a web app. I know there are security concerns with having a webpage execute actions on a local PC, but this is only used in-house.

What I'm attempting is to open an Explorer window on the client. For some of our customers we store part files in specific known locations on the network. To speed up the process, users would like to click a button and have an Explorer window open already at that known location. I know I could use the file:/// option, but newer versions of IE block this by default, and even when enabled the behavior is jerky and can result in unwanted navigation.

What I'd like is to create some sort of add-on/plugin that can be invoked from JavaScript, can be passed the folder location, and opens it. I have experimented with creating an ActiveX COM object, and it works, but the downside is that you can't install it on a per-user basis (no publishing option). In fact, in VS2015 I can't find a way to create a distribution at all for class libraries.

I know that something like this must be possible because of sites like Log Me In. Is there a better way to do this than an ActiveX control and if not what is the best way to distribute one. It would be really nice if the …
