Web Components

Web Components are a set of standards, currently being produced by Google engineers as a W3C specification, that allow for the creation of reusable widgets or components in web documents and web applications. The intention behind them is to bring component-based software engineering to the World Wide Web. The component model allows for encapsulation and interoperability of individual HTML elements. Support for Web Components is present in Chrome and Opera, and in Firefox behind a manual configuration change. Microsoft's Internet Explorer has not yet implemented any of the Web Components specifications. Web Components consist of four main specifications: Custom Elements, Shadow DOM, HTML Imports, and HTML Templates.


I just read that Bootstrap 4 is finally in beta. Is it okay to use Bootstrap 4 in production?

I need to set up a simple site that is mainly used for posting articles and links, but I would need to be able to set up ads, comment areas for the articles, and possibly a forum add-in eventually. I don't have web design experience, but I am familiar with how to lay a site out and what needs to be done.

What suggestions do you have for a good service that has site-building tools for non-developers, does not cost a lot, and offers the features we need? There probably won't be much traffic up front, so something just to get started that can be expanded later is all we need.
Some websites change their links and then forget to update the broken links.

For example, the link on this page http://www.xldynamic.com/source/xld.XtraTime.html refers to http://xldynamic.com/cgi-bin/counters/unicounter.pl?name=xld.XtraTime.dl&cache=0&deliver=http://www.xldynamic.com/downloads/xld.XtraTime.zip

which does not work, but if I remove part of the link and leave only http://www.xldynamic.com/downloads/xld.XtraTime.zip

it works and the file downloads.

I was wondering if there is an easy way to find a list of all the publicly available download links for a website.

I used to work with www.websitename.com/index.html a long time ago, but that does not work anymore.

Any idea how to get the list of download links for a website?
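One common approach to the question above is to fetch each page of the site and scan its HTML for anchors pointing at downloadable files. A minimal sketch of the link-filtering step (the file-extension list and the sample HTML are my own assumptions, and the regex is a rough heuristic, not a full HTML parser):

```javascript
// Extract candidate download links from a page's HTML.
// The extension list is an assumption - adjust it for your site.
const DOWNLOAD_EXTENSIONS = ['.zip', '.pdf', '.exe', '.msi', '.xla', '.xlsm'];

function extractDownloadLinks(html) {
  const links = [];
  // Match href attributes inside anchor tags (a rough regex sketch).
  const hrefPattern = /<a\b[^>]*\bhref\s*=\s*["']([^"']+)["']/gi;
  let match;
  while ((match = hrefPattern.exec(html)) !== null) {
    const url = match[1];
    const path = url.split('?')[0].toLowerCase();
    if (DOWNLOAD_EXTENSIONS.some((ext) => path.endsWith(ext))) {
      links.push(url);
    }
  }
  return links;
}

// Hypothetical sample page to demonstrate the filter.
const sampleHtml =
  '<a href="http://www.xldynamic.com/downloads/xld.XtraTime.zip">Download</a>' +
  '<a href="http://www.xldynamic.com/source/xld.XtraTime.html">Docs</a>';

console.log(extractDownloadLinks(sampleHtml));
// [ 'http://www.xldynamic.com/downloads/xld.XtraTime.zip' ]
```

To crawl a whole site you would repeat this over every page reachable from the home page. Note that links generated by scripts or server-side redirects (like the counter URL above) won't be discovered this way.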
Dear Sir/Madam,

I am using a KVM VPS plan for my domain; earlier I was using a Hatchling plan. Up to 10th March 2017 my website traffic was around 2,000 visitors per day, but after changing my server from the Hatchling plan to the VPS there was a huge drop in traffic. Initially I thought it was due to the server change, so I kept uploading fresh content every day and also resubmitted my site to Google, but nothing happened; even after continuous uploads my traffic is stuck at around 600 visitors/day. I have given my 100% to solving this but failed. I talked to the hosting provider's help team about this, but nothing happened.

Please advise what I should do to get my traffic back.

Thanks in advance!
When I try to automate the downloading of a file from an HTTPS website using XMLHTTP in the Excel VBA code shown below, the login credentials in my VBA code are always ignored. The downloaded file test.txt contains only one line with an error message that says "Your session has expired. Please login and download the file again."

I have done the following experiments:
1) If I manually log in to the website using the username and password, I can log in without any problems, so the login credentials are definitely correct.
2) If I manually log in to the website and then run the VBA macro, I still get the same error in the downloaded file test.txt.
3) If I log in manually on the website, then manually paste the file URL into the Internet Explorer address bar and hit Enter, the file is successfully downloaded with the correct content in it.
4) If I log out from the website, then manually paste the file URL into the Internet Explorer address bar and hit Enter, I get the same error message in the downloaded file DownloadError.txt in my Downloads folder.

Please advise,

Sub DownloadFile()

    Dim oStrm As Object
    Dim HttpReq As Object
    Dim myURL As String
    myURL = "https://www.spice-indices.com/idp/data-center/file-download/S9bFdHFp-zmvr2g1QPMXfIRR0G0AOYB9MO_3R3VKI5nzhvCTU0qGu7rr0d9EDDvDIqVHbBaIGCqW1nxGVe-K_aSTm5gBj1XtGNH_cWI9X0jjXJLPJmGEnlX3n1vfzW0AHny7HJXPgv5PkYUmaUOHQg_hK1Cqf--KnMQWx_RVrIEhsP5ajtN9Qc6IbncFC-LM"

    Set HttpReq = …
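The error message suggests the session cookie issued at login is never sent with the download request; depending on which MSXML object the code uses (ServerXMLHTTP in particular), it does not share Internet Explorer's cookie jar, which would explain why experiment 2 still fails. The idea is to capture the Set-Cookie headers from a login response and replay them on the download request. A sketch of just that step (in JavaScript for illustration, with hypothetical header values):

```javascript
// Build a Cookie request header from the Set-Cookie headers of a
// previous (login) response, so the session survives into the
// next request. Header values here are hypothetical.
function buildCookieHeader(setCookieHeaders) {
  return setCookieHeaders
    .map((h) => h.split(';')[0].trim()) // keep only "name=value"
    .join('; ');
}

// Example: what a login response might have returned.
const loginSetCookies = [
  'JSESSIONID=abc123; Path=/; HttpOnly',
  'auth_token=xyz789; Secure',
];

// This string would go into the "Cookie" header of the
// subsequent file-download request.
console.log(buildCookieHeader(loginSetCookies));
// JSESSIONID=abc123; auth_token=xyz789
```

The VBA equivalent would be to POST the login form first, read the response's Set-Cookie header, and pass it back via setRequestHeader "Cookie", ... on the download call (this works with ServerXMLHTTP; plain XMLHTTP may silently drop a manually set Cookie header).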
Hello... just wondering if there is an easy-to-use (and ideally free!) module to display videos on a website, similar to VideoLightbox. Strangely enough, VideoLightbox worked fine for me for a while and then all of a sudden just stopped working a few weeks ago, and I had never changed a thing. Anyhow, an alternative would be nice...

Do you all know of any cheap web-based inventory programs out there? It would need to be able to:

Enter orders (part, qty)
Receive parts (part, qty)
Stock into location
Pick from location
Hello Experts,
I am using the jsTree plugin to display my data. When I select a child node, I want to get the parent node's text on form submit. I have tried the code below.

$(document).on('click', '#btnSubmit', function () {
    var parent = [];
    var selectedElms = $js('#IndustryTree').jstree("get_selected", true);
    $.each(selectedElms, function () {
        var node = $js('#IndustryTree').jstree(true).get_node(this.parent, true);
        var Parentnode = $js('#IndustryTree').find("[id='" + this.parent + "']");
    });
});


But when I run this code, Parentnode[0].innerText contains the parent node's text as well as all the child nodes' text, so it is difficult to isolate the parent node's text.

Another issue: when I check the parent node, all child nodes are selected, but I am not getting that selected parent node using the code below:

var selectedElms =$js('#IndustryTree').jstree("get_selected", true);

Parent Node selection

This code works when I specifically select a child node; in that case the parent node's CSS class is jstree-icon jstree-checkbox jstree-undetermined

Child Node selection is Working
Any help would be appreciated
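In jstree's data model every node carries its parent's id and its own text field, so the parent's label can be resolved from the model instead of reading innerText from the DOM (which concatenates all descendant text). A standalone sketch of that lookup, using a hypothetical flat node map shaped like jstree's internal { id, parent, text } model:

```javascript
// Resolve the parent's own text for a selected node, given a flat
// map of nodes shaped like jstree's model ({ id, parent, text }).
// The data below is hypothetical, for illustration only.
function getParentText(nodesById, selectedId) {
  const node = nodesById[selectedId];
  if (!node || node.parent === '#') return null; // '#' is jstree's root id
  const parent = nodesById[node.parent];
  return parent ? parent.text : null;
}

const nodesById = {
  industry: { id: 'industry', parent: '#', text: 'Industry' },
  retail: { id: 'retail', parent: 'industry', text: 'Retail' },
};

console.log(getParentText(nodesById, 'retail')); // Industry
```

In the real plugin the same idea is `$js('#IndustryTree').jstree(true).get_node(this.parent).text`, which returns only the parent's own label. For the second issue, `get_selected` only returns fully selected nodes, so a parent in the undetermined state has to be collected separately (e.g. via the checkbox plugin's `get_checked` variants, or by walking each selected node's `parents` array).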

I'm running an ASP.NET page on IIS 8.5 (Windows Server 2012 R2) with IzillaFramework / Cognition CMS. Since this is a legacy system there is no documentation for it, and no one in the company is able to assist me, hence I'm posting it here.

Here it is chronologically:

When a customer visits the company's public ASP.NET web page to change their account details, after they fill in their details the page emails a PDF summary of the item they wanted to change (before and after), CC'd to our accounts department.

But since last week the web form no longer sends the PDF of what the customer entered; the email is sent, but it is blank, with no attachment.

At 12:14 PM the first customer got only a blank email with the company header and no attachment, while the accounts department got the same email but containing the data from the previous customer (N-1).

At 12:55 PM a second customer visited the same page to update their details, and the same thing happened: the customer got only a blank email with the company header, and the accounts department got the customer details as at 12:14 PM (the first customer's data).

The second customer's data is being held somewhere / cached.

Here is the error message that I can see in error.log:

Friday, 3 March 2017 1:53:35 PM
System.IO.IOException: The process cannot access the 


Can you please provide a comparison of Datadog, Agentless Monitor, and Nagios?

How can a WCF service return a file (perhaps as a stream?) based on its file type?

The idea is to let the end user call the web method and have the file downloaded to them.

Thank you.
I'm looking for an in-depth tutorial on web server and client communication using async WebSockets, preferably in VB.NET.
Hello all,
I am looking for either a WordPress, MyBB Forum, or standalone PHP application/plugin that I can use on my site (which uses the aforementioned applications) that will allow logged-in users to upload their own files and let people download them. The more options or controls the author of the upload has the better, but just something to allow them to upload and categorize the file, add screenshots or images and a description, etc., and let users view, search by category, and download the files. The option to download with or without an account would be nice as well.

Can anyone recommend anything that might work for this?

Hi experts,

I need advice on the two options below, with a quick analysis of cost and the advantages/disadvantages of each; I'd also like to know of any articles or reviews comparing the two services.



Alan Lam
I'm looking to create a link to a URL that updates automatically using today's date. The reason is that I want to monitor the pricing on my Airbnb listings without having to type in all the search dates every time. So I'd like a URL which is essentially as below, but with dates that update to, say, checkin= ** tomorrow ** &checkout= ** two days later **


Is there an easy way to do this, perhaps even an expression that the browser itself can parse?


PS: I've added the JavaScript topic as I'm interested in learning this, and if this is a viable method then it could be a good test
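A browser won't expand expressions inside a plain href by itself, but a small piece of JavaScript (e.g. in a bookmarklet or a local HTML page) can build the URL on the fly. A sketch with a hypothetical listing URL, since the real one isn't shown; dates are computed in UTC for simplicity, and "checkout" here is taken as two days after check-in (adjust as needed):

```javascript
// Build a search URL whose checkin is tomorrow and checkout is two
// days after that. The base URL and parameter names are assumptions -
// substitute your real listing-search URL.
function buildSearchUrl(baseUrl, today = new Date()) {
  const addDays = (d, n) => new Date(d.getTime() + n * 86400000);
  const iso = (d) => d.toISOString().slice(0, 10); // YYYY-MM-DD
  const checkin = addDays(today, 1);
  const checkout = addDays(checkin, 2);
  return `${baseUrl}?checkin=${iso(checkin)}&checkout=${iso(checkout)}`;
}

// Fixed date shown for a reproducible example; omit the second
// argument to use the current date.
console.log(buildSearchUrl('https://www.airbnb.com/s/my-area', new Date('2017-03-01')));
// https://www.airbnb.com/s/my-area?checkin=2017-03-02&checkout=2017-03-04
```

Saved as a bookmarklet (`javascript:location.href=buildSearchUrl(...)` inlined), this gives a one-click link that is always current.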
A vendor has supplied the following cURL command, which works to upload a file to their site (of course, I've obfuscated the API key, site, and my username; the command works fine from Windows with cURL installed):

curl -u user@foo.com:{API KEY} -F "data=@1239705.jpg" https://foo.com/path/subpath

My reading tells me that -F emulates a form, so I would think that 'data' is an arbitrary form variable name which they have chosen? I also understand that the @ tells cURL to read the file from the current directory on the computer running the command and supply that file as the input to the form variable (and maybe the file name; the vendor does use the same name as the local file when the command works). Most of my work with various web requests has either been completely coded into the URL (mostly as POST) or true SOAP web services. I've tried to piece the above command together several different ways with VB, but the only result I get is 500 (Internal Server Error). Note that it is a jpg file, so maybe I simply need to provide/change the encoding? When I run it from the Windows command line, the result is:


From my research, I'm not even completely sure whether to use WebRequest, HttpRequest, HttpWebRequest, WebClient, or something else. Most of my experience is with WebRequest with a NetworkCredential.

Is it correct that user@foo.com would be the user in my NetworkCredential and {API KEY} would be the password?

Thanks in advance!
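For reference, what `curl -F` actually sends is a multipart/form-data body, so whatever .NET class is used has to reproduce that shape ('data' is indeed just the field name the vendor chose, and `user@foo.com:{API KEY}` is ordinary HTTP Basic auth, i.e. the username and password for a NetworkCredential). A sketch of the wire format only (in JavaScript purely to illustrate the layout; the field and file names follow the vendor's command, the boundary is arbitrary):

```javascript
// Sketch of the multipart/form-data body that `curl -F "data=@file.jpg"`
// produces. The boundary is arbitrary; it just has to match the one
// declared in the Content-Type header and not occur in the file bytes.
function buildMultipartBody(fieldName, fileName, fileBytes, boundary) {
  const head =
    `--${boundary}\r\n` +
    `Content-Disposition: form-data; name="${fieldName}"; filename="${fileName}"\r\n` +
    `Content-Type: application/octet-stream\r\n\r\n`;
  const tail = `\r\n--${boundary}--\r\n`;
  // Binary-safe concatenation: never pass image bytes through a text string.
  return Buffer.concat([Buffer.from(head), fileBytes, Buffer.from(tail)]);
}

const boundary = '----sketch-boundary';
// Two bytes standing in for a real JPEG, for illustration.
const body = buildMultipartBody('data', '1239705.jpg', Buffer.from([0xff, 0xd8]), boundary);
console.log(body.toString('latin1').startsWith('--' + boundary)); // true
```

A 500 from hand-rolled attempts is very often a boundary mismatch between the Content-Type header and the body, or the JPEG being mangled by text encoding. In VB.NET, HttpWebRequest with a NetworkCredential (user@foo.com as the user name, the API key as the password) plus a request body built like this is one workable route.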
Hello Experts,
Is there any service that can be embedded in our site to collect remarks from our customers and publish them? However, our customer wants to be able to select what goes on the site, so the widget has to let him filter remarks before publishing. Is there a service like this we can plug into the site, and what is it called?
I need to start an ETL (Informatica) job configured in AutoSys from an external Java application. I have heard there is a REST endpoint to do this. Has anyone implemented it? Can you share sample code or links? Any help is highly appreciated.
I wanted to inquire whether anyone knows how pageviews get tracked with Google Analytics (web log, etc.).
I have a website with login capability, but based on the pageview counts it appears that all visitors are tracked, not just logins. Thanks for any info.

Good Afternoon,

Is anyone aware of a piece of software that can clone/index an entire website? A potentially useful application for this would be business continuity. We're looking for a way to spider our existing site and then serve it up elsewhere. There are some complexities behind the scenes with our site in regard to database reads, but truth be told, the core content, while it lives in a DB, really could be served up as "static" content. The appeal of a solution like this would be ultra-low BC/DR costs.

Is anyone aware of such a program/service? Again, we are fully aware that you can simply copy a site to another location, but the site in question is not a simple flat site.

Hi Guys,

Issue: two URLs (www.abc.com & www.abbc.com). We have created a new website and would like to direct traffic from the older website to the new one, but if a visitor comes back to the older site within 24 hours, it should take them to the older website until we fully launch the new one. Is this possible?

Any help is much appreciated.
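Yes, the usual trick is a cookie set on the old site recording the first visit, checked by a small script (or server-side rule) before redirecting. A sketch of just the decision logic, kept free of browser APIs so the behaviour is easy to follow; all names here are hypothetical:

```javascript
// Decide whether a visitor to the old site should be redirected to the
// new site. A visitor returning within 24 hours stays on the old site;
// everyone else goes to the new one.
const DAY_MS = 24 * 60 * 60 * 1000;

function shouldRedirectToNewSite(firstVisitMs, nowMs) {
  if (firstVisitMs === null) return true; // first ever visit: send to new site
  return nowMs - firstVisitMs >= DAY_MS;  // back within 24h: stay on old site
}

const now = Date.parse('2017-03-10T12:00:00Z');
console.log(shouldRedirectToNewSite(null, now));                     // true
console.log(shouldRedirectToNewSite(now - 2 * 60 * 60 * 1000, now)); // false
console.log(shouldRedirectToNewSite(now - 2 * DAY_MS, now));         // true
```

On the real site you would store the timestamp in document.cookie (or a server-side session) and issue the redirect to the new domain whenever the function returns true.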
I don't understand what is happening when I run my Gruntfile.js. I ran it from macOS and from Windows Server 2012; both gave me the exact same error.

This is my Gruntfile.js
'use strict';

module.exports = function (grunt) {

    // Time how long tasks take. Can help when optimizing build times
    require('time-grunt')(grunt);

    // Automatically load required Grunt tasks
    require('jit-grunt')(grunt, {
        useminPrepare: 'grunt-usemin'
    });

    // Define the configuration for all the tasks
    grunt.initConfig({
        pkg: grunt.file.readJSON('package.json'),

        // Make sure code styles are up to par and there are no obvious mistakes
        jshint: {
            options: {
                jshintrc: '.jshintrc',
                reporter: require('jshint-stylish')
            },
            all: {
                src: []
            }
        },

        useminPrepare: {
            html: 'app/menu.html',
            options: {
                dest: 'dist'
            }
        },

        // Concat
        concat: {
            options: {
                separator: ';'
            },
            // dist configuration is provided by useminPrepare
            dist: {}
        },

        // Uglify
        uglify: {
            // dist configuration is provided by useminPrepare
            dist: {}
        },

        cssmin: {
            dist: {}
        },

        // Filerev
        filerev: {
            options: {
                encoding: 'utf8',
                algorithm: 'md5'
            }
        }
    });
};

I am working with a website provided by the city government which deals with architectural plans and drawings, using a package called ProjectDox. When I go through the various panels and try to upload a "drawing", I'm presented with a Windows Explorer window to select the folder & file I want to upload. The problem is that the allowable file types do not include pdf files which are supposed to be included. The system admins insist that it's something wrong on my machine since no one else is having this problem, but I've tried 3 different machines all with the same result.

So, my question is: when a website opens an upload window on your machine with specific file type restrictions, where does this list of file types come from? I'm having a really hard time believing that it's coming from anywhere else other than the web server. Can there be something on my local machine that would override the list provided by the web server?

I'm using IE11 (required by this website) with Silverlight (also required). Have tried on Windows 10 and 8.1.


Harry Zisko
I need to find a combination of Python, Selenium, and browser versions that work best together on Windows 7. I have tried Python 3.5.2, Selenium 2.53.6, and various versions of Firefox, but none of them will do what I need.

I just need to select all of the page in front of me, which is a CMS page, so it isn't showing the html or body tags. Viewing the source code isn't a solution either, because as a CMS page it leaves out the content in the window that Selenium navigates to. I have tried XPath, and that is why I think I need to make sure all three work well together, because it shows the following error.

(ff2-32) C:\Users\Randal J. Watkins\ff2>python expertsbrazil_clean.py
Traceback (most recent call last):
  File "expertsbrazil_clean.py", line 82, in <module>
    button = driver.wait.until(EC.visibility_of_element_located((By.CLASS_NAME,
  File "C:\Users\RANDAL~1.WAT\Envs\ff2-32\lib\site-packages\selenium\webdriver\s
upport\wait.py", line 80, in until
    raise TimeoutException(message, screen, stacktrace)
selenium.common.exceptions.TimeoutException: Message:
    at FirefoxDriver.prototype.findElementInternal_ (file:///C:/Users/RANDAL~1.W
    at FirefoxDriver.prototype.findElement (file:///C:/Users/RANDAL~1.WAT/AppDat


Hi, is there a way to install the CVS plugin without using the Eclipse Marketplace? For security reasons I can't access it. Can I get the files and copy them to an Eclipse folder?
