
NoSQL Databases

124 Solutions · 246 Contributors

A NoSQL database provides a mechanism for the storage and retrieval of data that is modeled in ways other than the tabular relations used in relational databases. Motivations for this approach include simplicity of design, simpler "horizontal" scaling to clusters of machines, and finer control over availability. The data structures used by NoSQL databases (e.g. key-value, wide-column, graph, or document) are different from those used by default in relational databases, making some operations faster in NoSQL. The data structures used by NoSQL databases are also sometimes viewed as "more flexible" than relational database tables.
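The contrast can be sketched in a few lines of plain JavaScript (collection and field names below are made up for illustration): unlike rows in a relational table, two documents in the same collection need not share the same set of fields, and a key-value store reduces access to opaque values looked up by key.

```javascript
// Two "documents" in the same hypothetical collection: they do not
// need to share a fixed set of columns the way relational rows do.
const users = [
  { _id: 1, name: "Ada", email: "ada@example.com" },
  { _id: 2, name: "Linus", tags: ["kernel", "git"], address: { city: "Portland" } }
];

// A key-value store is simpler still: opaque values retrieved by key.
const kv = new Map();
kv.set("user:1", JSON.stringify(users[0]));

console.log(JSON.parse(kv.get("user:1")).name); // prints "Ada"
```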


I am about to start playing around with the Bitcoin open-source code, and got introduced to a new term: a blockchain database.

What technologies are needed for this? Is this NoSQL?

Thanks.

Hello All,

I am reading a book called Express.js Blueprints. I am trying to wrap my mind around authentication using Passport. Serializing and deserializing isn't clicking for me yet; I have only just started learning Node and Express, which is a big part of why.

Here's the code from the book on setting up Passport. Starting with line 5, can someone please break down what's happening? Where is the "user" parameter in the serializeUser function coming from? Where did "user.id" come from?

var passport = require('passport');
var LocalStrategy = require('passport-local').Strategy;
var User = require('mongoose').model('User');

passport.serializeUser(function(user, done) {
  done(null, user.id);
});
passport.deserializeUser(function(id, done) {
  User.findById(id, done);
});

passport.use(new LocalStrategy(function(email, password, done) {
  User.findOne({
    email: email
  }, function(err, user) {
    if (err) return done(err);
    if (!user) {
      return authFail(done);
    }
    if (!user.validPassword(password)) {
      return authFail(done);
    }
    return done(null, user);
  });
}));

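For what it's worth, the round trip can be imitated without Passport itself. The sketch below (plain Node, all names hypothetical) mimics the idea: the `user` handed to `serializeUser` is whatever object the strategy's `done(null, user)` produced after a successful login, and `user.id` is just that user document's id, which is what gets kept in the session.

```javascript
// Minimal mock of the serialize/deserialize round trip.
// "fakeDb" stands in for the Mongoose User model; names are illustrative.
const fakeDb = { "42": { id: "42", email: "a@b.com" } };

const serializeUser = (user, done) => done(null, user.id);    // session stores only the id
const deserializeUser = (id, done) => done(null, fakeDb[id]); // id -> full user per request

// Login: the strategy verified credentials and handed over a user object.
const user = fakeDb["42"];
let session = {};
serializeUser(user, (err, id) => { session.passportUserId = id; });

// A later request: rebuild req.user from the id kept in the session.
let reqUser;
deserializeUser(session.passportUserId, (err, u) => { reqUser = u; });

console.log(session.passportUserId, reqUser.email); // "42" "a@b.com"
```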

I am trying to connect to MongoDB Atlas from my JavaScript, but I keep getting the following error.

MongoError: connection 5 to isaaccluster-shard-00-02-yng8g.mongodb.net:27017 closed
    at Function.MongoError.create (C:\Users\558642\ga\js-dc-5\11-crud-and-dbs\assignment\todo\node_modules\mongodb-core\lib\error.js:29:11)
    at TLSSocket.<anonymous> (C:\Users\558642\ga\js-dc-5\11-crud-and-dbs\assignment\todo\node_modules\mongodb-core\lib\connection\connection.js:202:22)
    at Object.onceWrapper (events.js:293:19)
    at emitOne (events.js:101:20)
    at TLSSocket.emit (events.js:191:7)
    at _handle.close (net.js:511:12)
    at Socket.done (_tls_wrap.js:332:7)
    at Object.onceWrapper (events.js:293:19)
    at emitOne (events.js:96:13)
    at Socket.emit (events.js:191:7)
    at TCP._handle.close [as _onclose] (net.js:511:12)


Here's my code
const express = require('express')
const hbs = require('express-handlebars')
const mongoose = require('mongoose')
const bodyParser = require('body-parser')

mongoose.connect('mongodb://XXXXXXXXXXXX@isaaccluster-shard-00-00-yng8g.mongodb.net:27017,isaaccluster-shard-00-01-yng8g.mongodb.net:27017,isaaccluster-shard-00-02-yng8g.mongodb.net:27017/<DATABASE>?ssl=true&replicaSet=IsaacCluster-shard-0&authSource=admin')

const itemEntry = require('./models/toDoEntry.js')
const app = express()

app.get('/', function( req, res ) {
	itemEntry.find({}, function( err, itemEntries ) {
			res.render('todoList',


Hi there,

I have a mongo aggregation query that works fine in the mongo shell (and Robomongo), but I cannot work out how to translate it into a PHP query.

I am using PHP 5.6 with the latest mongo class (MongoDB\Driver\Query). The mongo query looks like this:

db.products.aggregate(
   [
     {$match: {
             vendor_name : "vendor8",
             distributor_id : 8
         }
     },    
     { $sort: { 
         cw_product_code: 1, download_Date: 1 
         } 
     },
     { $group:
         {
           _id: "$cw_product_id",
           lastDownloadDate: { $last: "$download_Date" },
         }
     }        
   ],
     {allowDiskUse: true}     
)



Any help to point me in the right direction would be appreciated.
Hi, I have the following document in MongoDB:

name: "john",
state: "GA",
city: [
     {"atlanta", 30350},
     {"atlanta", 30351},
     {"atlanta", 30352},
     {"marietta", 45093}
]


How do I aggregate the array of cities to get a document like this:

name: "john",
state: "GA",
city: [
     {"atlanta", "30350, 30351, 30352"},
     {"marietta", "45093"}
]
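If it helps, the desired reshaping can be expressed in plain JavaScript (in MongoDB this would typically be a `$group` with `$push`; the field names below are assumptions, since the sample document above isn't valid JSON):

```javascript
// Group an array of {city, zip} pairs into one entry per city,
// joining the zip codes into a comma-separated string.
const doc = {
  name: "john",
  state: "GA",
  city: [
    { city: "atlanta", zip: 30350 },
    { city: "atlanta", zip: 30351 },
    { city: "atlanta", zip: 30352 },
    { city: "marietta", zip: 45093 }
  ]
};

const grouped = Object.entries(
  doc.city.reduce((acc, { city, zip }) => {
    (acc[city] = acc[city] || []).push(zip); // collect zips per city
    return acc;
  }, {})
).map(([city, zips]) => ({ city, zips: zips.join(", ") }));

console.log(grouped);
// [ { city: 'atlanta', zips: '30350, 30351, 30352' },
//   { city: 'marietta', zips: '45093' } ]
```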
Hello there,

I am a Java developer and am very new to Linux and Cassandra. I am using Ubuntu 16.04 and have installed Cassandra v3.9.0 from the DataStax site. Now when I run the bin/cqlsh command from the Cassandra directory I get the error:
No appropriate python interpreter found.

To make sure I have Python, I tried the command python -v, and then I get this message:
The program python can be found in the following packages:

Do I have Python by default? Please help.

cheers
Zolf
Hello Experts,

I would like to know whether the approach below can help us build an online application where a registered user can upload videos and post comments. We have chosen an RDBMS (MySQL) and a NoSQL database (MongoDB). In the RDBMS we would keep static master data, such as the registered-user, country, and region tables. Transactional data, such as uploaded videos and comments, would go in the NoSQL database (MongoDB). With this approach, will we manage the data better, or is there another approach that would yield a better result infrastructure-wise?
We will be deploying on Amazon Web Services; the idea is to have agility, performance, and scalability reign supreme.



Thanks


Roy,,,
Hi,

Assuming I have a parent class that I filter on various properties, one of which is a property that is an array of items. Now say I want to return the parent item only if its array of items is above a min value and below a max value; that's fine, I can work that bit out. What if I then want to sort on the filtered result set of those items?

I made a C# fiddle example to show what I'm trying to achieve: https://dotnetfiddle.net/mV4d28 (note that foo2 is returned first even though foo1 has items in its array that are less than those in foo2).

As I need to do this using an index, I need the index to be able to compute the order-by based on the filter criteria used in my query.

I know Elasticsearch has an inner hits function that does this, and Mongo has pipelines which also do this, so I'm sure Raven must have a way of doing this too?

I was hoping that using just an index and a transform with params I could achieve this, so I tried it:

my index and transform look like this

public class familyTransfrom : AbstractTransformerCreationTask<ParentItem>
{
    public class Result : ParentItem{
        public double[] ChildItemValuesFiltered { get; set; }
    }
    public familyTransfrom(){
        TransformResults = parents => from parent in parents
        let filterMinValue = Convert.ToDouble(ParameterOrDefault("FilterMinValue", Convert.ToDouble(0)).Value<double>())
        let filterMaxValue = 


I am currently trying to run a script that I have written; however, it constantly hangs halfway through. It has worked on the odd occasion, but more often than not it gets nowhere. Here is the script:

:: load metadata only

impdp  *username*/******@Server1 exclude=user REMAP_SCHEMA=SC_MUTBLDN:SC_MUTBLDN schemas=SC_MUTBLDN CONTENT=METADATA_ONLY dumpfile=ALL_METADATA_DAILY.dmp logfile=ALL_METADATA_DAILY.log



:: load data only

impdp  *username*/******@Server1  Full=Y EXCLUDE=TABLE:"LIKE'CRAC%'"  dumpfile=ALL_DATA_DAILY.DMP table_exists_action=replace  logfile=ALL_DATA_DAILY.log

impdp  *username*/******@Server1  EXCLUDE=TABLE:"LIKE'CRAC%'"  REMAP_SCHEMA=SC_MUTBLDN:SC_MUTBLDN   dumpfile=ALL_DATA_DAILY.DMP table_exists_action=replace  logfile=ALL_DATA_DAILY.log


:: dump data minus crac tables

impdp *username*/******@Server1 tables=CRAC_TYPE dumpfile=CRAC_TYPE.dmp table_exists_action=replace logfile=CRAC_TYPE.log
impdp *username*/******@Server1 tables=CRAC_KEYWORD dumpfile=CRAC_KEYWORD.dmp table_exists_action=replace logfile=CRAC_KEYWORD.log
impdp *username*/******@Server1 tables=CRAC_DEALTYPE dumpfile=CRAC_DEALTYPE.dmp table_exists_action=replace logfile=CRAC_DEALTYPE.log
impdp *username*/******@Server1 tables=CRAC_TABLENAME dumpfile=CRAC_TABLENAME.dmp table_exists_action=replace logfile=CRAC_TABLENAME.log
impdp *username*/******@Server1 tables=CRAC_DATABASENAME dumpfile=CRAC_DATABASENAME.dmp table_exists_action=replace logfile=CRAC_DATABASENAME.log
I have a 3-node DataStax Cassandra (Community) cluster with huge data. I have a few tables which contain 3-5 billion records each. I want to delete data that is older than 90 days from those tables.

The problem is how to run a select query that completes without a timeout. I am currently running the query below:

NOW=$(date -d "-3 month" +"%Y-%m-%d")
select day_ts from table_name where minute_ts < '$NOW' LIMIT 100000 ALLOW FILTERING;


Even if I limit the select query result, it will still scan the whole 3-5 billion records and then filter the data.

Please suggest an efficient way to do this.

Is MarkLogic v5 FIPS compatible/enabled? I can't find anything online for this version.
I have a financial project that receives real-time stock data from a data vendor, saves it into a MySQL database, then retrieves the data and sends it to the end user's browser. The client software provided by the data vendor to receive the stock data is a C/C++ program running on the server; it can save the data into the MySQL database (it does not have to be MySQL and could be switched to any other database). To retrieve the data from the database as quickly as possible, is there any framework I can use? I have heard about CES or ESP, and Spark Streaming; can any of them be used for my project? If not, how can I retrieve only the unread data from the database as soon as it reaches the database? The stock data feed is probably at most 1000 records a second (my wild guess; might not be correct). See the sample below.

+---------------------+--------+-------------------+-------------+
| insertTime                  | symbal | trade_time                 | trade_price |
+---------------------+--------+-------------------+-------------+
| 2016-09-15 04:00:00 | AAPL   | 20160915040000017 |      111.70 |
| 2016-09-15 04:00:00 | AAPL   | 20160915040000017 |      111.70 |
| 2016-09-15 04:00:00 | AAPL   | 20160915040000200 |      111.69 |
| 2016-09-15 04:00:00 | AAPL   | 20160915040000200 |      111.69 |
| 2016-09-15 04:00:00 | AAPL   | 20160915040000272 |      111.51 |
| 2016-09-15 04:01:14 | AAPL   | 20160915040113878 |      111.57 |
| 2016-09-15 04:01:14 | AAPL   …
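One common pattern (a sketch of the idea, not a recommendation of any particular framework) is to poll for rows newer than a high-water mark you remember between polls, e.g. an auto-increment id or the insert timestamp; in SQL that would be `SELECT ... WHERE id > ? ORDER BY id`. A JavaScript sketch with an in-memory stand-in for the ticks table (all names are assumptions):

```javascript
// Poll an append-only feed for "unread" rows by remembering a high-water mark.
// "rows" stands in for the stock-tick table, ordered by an auto-increment id.
const rows = [
  { id: 1, symbol: "AAPL", price: 111.70 },
  { id: 2, symbol: "AAPL", price: 111.69 },
  { id: 3, symbol: "AAPL", price: 111.51 }
];

let lastSeenId = 0;
function fetchUnread() {
  const fresh = rows.filter(r => r.id > lastSeenId); // WHERE id > lastSeenId
  if (fresh.length) lastSeenId = fresh[fresh.length - 1].id;
  return fresh;
}

console.log(fetchUnread().length); // 3 on the first poll
console.log(fetchUnread().length); // 0 until new rows arrive
```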
Hello

Configuring sharding: what will happen if we lose a node? Do we lose the entire database?
• What happens in case of a write conflict? Are we able to rebuild the database from a source (primary storage)?

Thanks

Regards
I am about to begin a large research project using natural language processing and web crawling. So, I am wondering about what AWS can offer for a scalable platform to undertake what may be a large amount of processing. I can see the advantages of having them handle everything for me. But I can also see that hardware costs can be super cheap.

I have no real experience setting up my own network or building my own desktop machines, so, I would certainly lose time on configurations.
 
What are the pros and cons of either plan?


Thanks.
Hi we have a client running 2 somewhat large databases on the Firebird platform.

My understanding is that it's an open-source SQL database.

The database files are basically like this:
 
DATABASE1.FDB    = 2Gig
DATABASE1STORAGE.FDB   = 65Gig

I have been trying to find a nice utility to create daily BAK files of both of these databases.

DATABASE1.BAK
DATABASE1STORAGE.BAK

We then want to back these up offsite daily, rather than trying to shadow-copy the FDB files, etc.

Anyone have experience here?
Thanks
I am moving into the Big Data industry with a new job. I just wanted to know if there are any links where I can get some real-time scripting examples.
Hello, can someone comment on this code and explain for me, technically, what it is doing and the logic behind it, and more importantly the parts I commented on in the code below, and how MongoDB is being handled? Thanks.

<?php

error_reporting(1);

ini_set("log_errors", 1);
ini_set("error_log", "/tmp/php-error.log");
$date = date("Y-m-d H:i:s");
error_log("$date: Hello! Running script /roadyo_base.php" . PHP_EOL);

require('../Models/config.php');
require('../Models/Pubnub.php');

$pubnub = new Pubnub(PUBNUB_PUBLISH_KEY, PUBNUB_SUBSCRIBE_KEY);
//can anyone explain for me technically what is going on with MONGO DB
$con = new MongoClient("mongodb://" . MONGODB_HOST . ":" . MONGODB_PORT . "");
$db = $con->selectDB(MONGODB_DB);
if (MONGODB_USER != '' && MONGODB_PASS != '')
    $db->authenticate(MONGODB_USER, MONGODB_PASS);

$favourite = $db->selectCollection('favourite');
$booking_route = $db->selectCollection('booking_route');

$location = $db->selectCollection('location');

$location->ensureIndex(array("location" => "2d"));

$use = array('pubnub' => $pubnub, 'location' => $location, 'favourite' => $favourite, 'booking_route' => $booking_route, 'db' => $db);

$pubnub->subscribe(array(
    "channel" => APP_PUBNUB_CHANNEL,
    "callback" => function($message) use($use) {
        //what (int) means
        $a = (int) $message['message']['a'];

        $args = $message['message'];

        if ($a == 4) { //update driver location
            if ($args['devId'] == '')
     


A newbie question: it seems that Mongo does not, by default, describe a hierarchical structure. If I were using something like XPath in XML, I could find the parents of a given node. Am I correct in thinking that you need to build in tree-like relationships (with, say, a 'parent' field) to be able to do that?
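That is correct in spirit: there is no built-in ancestor axis like XPath's, so tree queries are usually modeled explicitly, for example with a parent reference (or a materialized path) stored on each document and ancestors found by walking the chain. A toy in-memory sketch of the parent-reference pattern (field names hypothetical):

```javascript
// "Parent references" pattern: each node stores its parent's _id,
// and ancestors are found by following the chain up to the root.
const nodes = {
  root: { _id: "root", parent: null },
  a:    { _id: "a",    parent: "root" },
  leaf: { _id: "leaf", parent: "a" }
};

function ancestors(id) {
  const out = [];
  for (let p = nodes[id].parent; p !== null; p = nodes[p].parent) out.push(p);
  return out;
}

console.log(ancestors("leaf")); // [ 'a', 'root' ]
```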
I have a MongoDB collection, and for the documents added to the collection I used a timestamp field to capture the current timestamp in this format:
2016-03-19T15:49:46-05:00

I am looking for a find query to return all documents added to the database in the past 24 hours.

I would appreciate your help.
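Assuming the field is stored as a BSON date (if it is stored as a string in that ISO format, it would need converting first), the usual shape is a `$gte` comparison against a cutoff computed 24 hours back; the field name "timestamp" below is an assumption:

```javascript
// Build a "last 24 hours" filter for a hypothetical "timestamp" field.
const DAY_MS = 24 * 60 * 60 * 1000;
const cutoff = new Date(Date.now() - DAY_MS);

// This object would be passed to find(), e.g. db.collection.find(query)
const query = { timestamp: { $gte: cutoff } };

console.log(query.timestamp.$gte instanceof Date); // true
```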

Hi Experts,

Is there anything other than SQLite that could be used to store records really easily for a web app? Something entity-driven would be great, or anything super simple?

Thanks!
Mike
I've managed to install MongoDB and have it running successfully on my Windows 10 system. The one thing I found surprising was that I could not run mongod.exe nor mongo.exe from GitBash, so I am wondering why that might be the case?

Note: I can run mongod.exe and mongo.exe via the Windows Command Prompt. The reason I prefer GitBash is because it allows me to zoom in, thus making the text more viewable.
I usually use SQL Server and SQL Server Management Studio.

I'm new to MongoDB.

I installed MongoDB, and then with the Windows Command Prompt I verified that I installed it correctly.
I walked through the installation tutorial on the MongoDB website.

Does MongoDB have a graphical user interface tool, like SQL Server does with SQL Server Management Studio, that I can use on a Windows PC?
What are some of the technologies that Facebook / Twitter use to store and search their massive data graphs?
I'm looking for something with the capability to search massive amounts of data that will be growing constantly.
MongoDB collection insert document fails

I'm not sure why it's failing.

db.post3.insert([
{
      title: 'MongoDB Overview', 
      description: 'MongoDB is no sql database',
      by: 'tutorials point',
      url: 'http://www.tutorialspoint.com',
      tags: ['mongodb', 'database','NoSQL'],
      likes: 100
},
{
      title: 'NoSQL Database', 
      description: 'NoSQL database doesn't have tables',
      by: 'tutorials point',
      url: 'http://www.tutorialspoint.com',
      tags: ['mongodb', 'database','NoSQL'],
      likes: 20, 
      comments: [  
      {
            user:'user1',
            message: 'My first comment',
            dateCreated: new Date(2013,11,10,2,35),
            like: 0 
     }
   ]
}
])


fail.jpg
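One thing worth double-checking in a snippet like this is the quoting: a single-quoted string that itself contains an apostrophe (as in a description reading "doesn't") terminates early unless the apostrophe is escaped or the string uses double quotes. A tiny illustration:

```javascript
// An apostrophe inside a single-quoted string must be escaped,
// or the string delimited with double quotes instead.
const fine = "NoSQL database doesn't have tables";     // double quotes
const alsoFine = 'NoSQL database doesn\'t have tables'; // escaped apostrophe

console.log(fine === alsoFine); // true
```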
Hi Experts,

I am using Redis cache.
The first time, my method hits the DB and gets the values.
The next time I want the same values, I get them from the Redis cache manager.

The problem is that sometimes the Redis server goes down, and then my code breaks.

I want a check such that if the server is down, we skip getting the values from the cache manager and get them from the DB instead.
Also, even when I move to another environment (say, from dev to production) where the Redis server is not installed, my code should not break.

Can someone suggest how to do this?

 @Autowired
    private CacheManager<UserGroupToGroupType> cacheManager;
    
    @Override
    public List<UserGroupToGroupTypeDTO> getGroupForCustomerIdAndGroupType(String customerId, Integer groupTypeId) {
        UserGroupToGroupType groupToGroupType=cacheManager.findById(customerId);
        if(groupToGroupType!=null){
            return groupToGroupType.getUserGroupToGroupTypeDTOs();
        }
        List<Object[]> namedFiltersList = userGrpToGrpDao.getGroupForCustomerIdAndGroupType(customerId, groupTypeId,
                UserGroupToGroupTypeDAOImpl.Readset.ID_GID_GNAME.getName());
        return UserGroupToGroupTypeServiceHelper.convertDotoDTOforID_GID_GNAME(namedFiltersList,cacheManager,customerId);
    }
}

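The code above is Java, but the fallback pattern itself is language-neutral: wrap the cache read in a try/catch and fall through to the DB on any cache failure, so a missing or down Redis only degrades to slower reads. A JavaScript sketch of the idea (all names hypothetical):

```javascript
// Cache-aside with graceful fallback: any cache error degrades to a DB read
// instead of breaking the request.
function getWithFallback(cache, db, key) {
  try {
    const hit = cache.get(key); // may throw if the cache server is down
    if (hit !== undefined) return hit;
  } catch (err) {
    // swallow the cache error (optionally log it) and fall through
  }
  return db.get(key); // authoritative source of truth
}

const db = { get: k => `db-value-for-${k}` };
const brokenCache = { get: () => { throw new Error("connection refused"); } };

console.log(getWithFallback(brokenCache, db, "42")); // "db-value-for-42"
```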
