
NoSQL Databases


A NoSQL database provides a mechanism for storage and retrieval of data that is modeled by means other than the tabular relations used in relational databases. Motivations for this approach include simplicity of design, simpler "horizontal" scaling to clusters of machines, and finer control over availability. The data structures used by NoSQL databases (e.g. key-value, wide column, graph, or document) are different from those used by default in relational databases, making some operations faster in NoSQL. Sometimes the data structures used by NoSQL databases are also viewed as "more flexible" than relational database tables.
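For illustration, a hedged sketch of the difference in data shape (all field names below are made up): in a document store, an order and its line items can live in one self-describing document, whereas a relational design would typically spread the same data across orders, order_items and customers tables joined by foreign keys.

{
    "_id": "order-1001",
    "customer": { "name": "Ann", "email": "ann@example.com" },
    "items": [
        { "sku": "A-1", "qty": 2, "price": 9.99 },
        { "sku": "B-7", "qty": 1, "price": 24.50 }
    ],
    "status": "shipped"
}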


I am about to start playing around with the Bitcoin open-source code, and I have been introduced to a new term: a blockchain database.

What technologies are needed for this? Is this NoSQL?

Thanks.

Hello All,

I am reading a book called Express.js Blueprints. I am trying to wrap my mind around authentication using Passport. Serializing and deserializing is not registering with me. I have just started learning Node and Express.js, so that's a big reason why.

Here's the code from the book on setting up Passport. Starting at line 5, can someone please break down what's happening? Where is the "user" parameter coming from in the serializeUser function? Where did "user.id" come from?

var passport = require('passport');
var LocalStrategy = require('passport-local').Strategy;
var User = require('mongoose').model('User');

passport.serializeUser(function(user, done) {
  done(null, user.id);
});
passport.deserializeUser(function(id, done) {
  User.findById(id, done);
});

passport.use(new LocalStrategy(function(email, password, done) {
  User.findOne({
    email: email
  }, function(err, user) {
    if (err) return done(err);
    if (!user) {
      return authFail(done);
    }
    if (!user.validPassword(password)) {
      return authFail(done);
    }
    return done(null, user);
  });
}));
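For context, here is an annotated restatement of those lines (comments only, no new behaviour). The short version: "user" is whatever object the LocalStrategy callback handed to done(null, user), and user.id is Mongoose's string form of that document's _id.

passport.serializeUser(function(user, done) {
  // "user" is the object your strategy verified, i.e. the mongoose User
  // document passed to done(null, user) in the LocalStrategy above.
  // Whatever you pass to done() here is all that gets stored in the session.
  done(null, user.id);   // user.id is Mongoose's string getter for _id
});

passport.deserializeUser(function(id, done) {
  // On each later request, Passport reads that stored id back out of the
  // session and asks you to turn it into req.user again.
  User.findById(id, done);
});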

Hi,

I'm new to CB and I have the following documents with these fields (among others):
  "_class": "com.wrapper",
  "dbProfile": {
         "operatorId": "3026",
          "profileType": "BEL_PR_LT"
}

I have several combinations of the pair dbProfile.operatorId/dbProfile.profileType among the documents, but all have the same _class.

I want to run a query that would display all the possible combinations of dbProfile.operatorId/dbProfile.profileType only once, even if a combination appears more than once (DISTINCT in SQL).

What would this look like in a CB query? My bucket name is apoiu.

Tks,
J
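For context, a rough sketch of one way this could look. The core of it is the N1QL DISTINCT clause; the Node.js wrapper around it is only illustrative and assumes the Couchbase Node SDK 2.x and an existing primary index on the bucket:

// Hedged sketch: DISTINCT over the operatorId/profileType pair in bucket "apoiu".
// Assumes a primary (or covering) index exists so the bucket can be queried.
var couchbase = require('couchbase');
var cluster = new couchbase.Cluster('couchbase://localhost');
var bucket = cluster.openBucket('apoiu');

var q = couchbase.N1qlQuery.fromString(
  'SELECT DISTINCT a.dbProfile.operatorId, a.dbProfile.profileType ' +
  'FROM `apoiu` a WHERE a._class = "com.wrapper"'
);

bucket.query(q, function(err, rows) {
  if (err) throw err;
  rows.forEach(function(row) {
    console.log(row.operatorId, row.profileType);
  });
});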
A technology that is kind of a fad, but will probably last (just not as popular), is the NoSQL database. They have their place for certain applications, but I don't believe they will replace SQL databases. Having to know how you are going to query your data before you store it is just not realistic.
I just installed Elasticsearch, MongoDB, and Graylog2, but I am getting an error and am unable to access the web interface. Please suggest a solution.

/etc/elasticsearch
# ls -lrtha
drwxr-x---.  2 root elasticsearch    6 Apr 24 16:29 scripts
-rwxr-x---.  1 root elasticsearch 2.6K Apr 24 16:04 logging.yml
-rwxr-x---.  1 root elasticsearch 3.2K Jul 20 05:05 elasticsearch.yml


/var/log/graylog-server/server.log

2017-07-20T07:50:36.694Z INFO  [CmdLineTool] Loaded plugin: Elastic Beats Input 2.2.3 [org.graylog.plugins.beats.BeatsInputPlugin]
2017-07-20T07:50:36.696Z INFO  [CmdLineTool] Loaded plugin: Collector 2.2.3 [org.graylog.plugins.collector.CollectorPlugin]
2017-07-20T07:50:36.699Z INFO  [CmdLineTool] Loaded plugin: Enterprise Integration Plugin 2.2.3 [org.graylog.plugins.enterprise_integration.EnterpriseIntegrationPlugin]
2017-07-20T07:50:36.700Z INFO  [CmdLineTool] Loaded plugin: MapWidgetPlugin 2.2.3 [org.graylog.plugins.map.MapWidgetPlugin]
2017-07-20T07:50:36.708Z INFO  [CmdLineTool] Loaded plugin: Pipeline Processor Plugin 2.2.3 [org.graylog.plugins.pipelineprocessor.ProcessorPlugin]
2017-07-20T07:50:36.709Z INFO  [CmdLineTool] Loaded plugin: Anonymous Usage Statistics 2.2.3 [org.graylog.plugins.usagestatistics.UsageStatsPlugin]
2017-07-20T07:50:36.813Z ERROR [CmdLineTool] Invalid configuration
com.github.joschi.jadconfig.ValidationException: Cannot read file elasticsearch_config_file at path /etc/elasticsearch/elasticsearch.yml. Please specify the correct …
In this article, I’ll look at how you can use a backup to start a secondary instance for MongoDB.
Hi,

Please bear with me; my Mongo knowledge is minimal and my Google-fu has failed me after an hour or two.

I have data in Mongo that looks as follows:

{
    "_id" : ObjectId("xxxx"),
    "applicationData" : {
        "groups" : [
            "group1",
            "group2"
        ],
        "status" : "ok"
    },
    "appid" : "UID0001"
}

{
    "_id" : ObjectId("yyyy"),
    "applicationData" : {
        "groups" : [
            "group2"
        ],
        "status" : "ok"
    },
    "appid" : "UID0001"
}

From this I want to group by the content of "groups" and the appid and get a count - the result would be as follows:

group1  UID0001 1
group2  UID0001 2

One step further: I'd only like it to report where the count is greater than 1.

How can I achieve this?

I guess, in SQL, it would be:

select groups, appid, count(*) from db having count(*) > 1

I'm not having much (any) success, further complicated by the fact that one of the values I want to group by is in an array.

Thanks in advance
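For context, a minimal aggregation sketch (mongo shell), assuming the collection is called apps (the real name isn't given above). $unwind flattens the groups array so each group/appid pair can be counted, and the final $match keeps only counts greater than 1:

// Hedged sketch: count documents per (group, appid) pair, then filter on the count.
db.apps.aggregate([
  { $unwind: "$applicationData.groups" },
  { $group: {
      _id: { group: "$applicationData.groups", appid: "$appid" },
      count: { $sum: 1 }
  } },
  { $match: { count: { $gt: 1 } } }
])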
I am trying to connect to MongoDB Atlas from my JavaScript, but I keep getting the following error.

MongoError: connection 5 to isaaccluster-shard-00-02-yng8g.mongodb.net:27017 closed
    at Function.MongoError.create (C:\Users\558642\ga\js-dc-5\11-crud-and-dbs\assignment\todo\node_modules\mongodb-core\lib\error.js:29:11)
    at TLSSocket.<anonymous> (C:\Users\558642\ga\js-dc-5\11-crud-and-dbs\assignment\todo\node_modules\mongodb-core\lib\connection\connection.js:202:22)
    at Object.onceWrapper (events.js:293:19)
    at emitOne (events.js:101:20)
    at TLSSocket.emit (events.js:191:7)
    at _handle.close (net.js:511:12)
    at Socket.done (_tls_wrap.js:332:7)
    at Object.onceWrapper (events.js:293:19)
    at emitOne (events.js:96:13)
    at Socket.emit (events.js:191:7)
    at TCP._handle.close [as _onclose] (net.js:511:12)


Here's my code
const express = require('express')
const hbs = require('express-handlebars')
const mongoose = require('mongoose')
const bodyParser = require('body-parser')

mongoose.connect('mongodb://XXXXXXXXXXXX@isaaccluster-shard-00-00-yng8g.mongodb.net:27017,isaaccluster-shard-00-01-yng8g.mongodb.net:27017,isaaccluster-shard-00-02-yng8g.mongodb.net:27017/<DATABASE>?ssl=true&replicaSet=IsaacCluster-shard-0&authSource=admin')

const itemEntry = require('./models/toDoEntry.js')
const app = express()

app.get('/', function( req, res ) {
	itemEntry.find({}, function( err, itemEntries ) {
			res.render('todoList',

I am using PHP and MongoDB. I am new to MongoDB, but fine with PHP. In my MongoDB database I have a collection called 'users' with a number of fields such as firstname, lastname, etc. There are a number of users.


I also have a field in some user documents called web_links which contains an array, i.e. within my users document:
{
    "_id" : ObjectId("587af11ec09cf31a1955ed92"),
    "username" : "mike",
    "firstname" : "Mike",
    "lastname" : "Tester",
    "email" : "mike@test.com",
    "web_links" : [
        {
            "name" : "google",
            "link" : "https://google.com",
            "status" : "1",
            "added" : ISODate("1970-01-18T08:00:57.600Z")
        },
        {
            "name" : "yahoo",
            "link" : "https://yahoo.com",
            "status" : "1",
            "added" : ISODate("1970-01-18T08:00:57.600Z")
        }
    ]
}

Firstly, I am trying to update the status of an array item.
I have got as far as this:

$name = 'google';
$newstatus = '0';

$this->database->users->updateOne(['_id'=>new MongoDB\BSON\ObjectId($userid)],['web_links.name'=>$name],['$set'=>['web_links.$.status'=>$newstatus]]);

This is returning the following error via firebug:
Uncaught exception 'MongoDB\Exception\InvalidArgumentException' with message 'First key in $update argument is not an update operator'.

I have tried to change $set to $push, but that is not the answer.

I would be very grateful for assistance …
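For context, a shell-level sketch of the intended update. The key point is that updateOne takes two arguments, a single filter document and an update document, so the web_links.name condition belongs inside the filter; the positional $ operator in the update then targets the array element that filter matched:

// Hedged sketch (mongo shell equivalent of the intended PHP call).
db.users.updateOne(
  { _id: ObjectId("587af11ec09cf31a1955ed92"), "web_links.name": "google" },
  { $set: { "web_links.$.status": "0" } }
)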
I am learning Angular, Node, npm, deployd, mongodb, etc.  I am making my way through a book titled "Pro AngularJS" by Adam Freeman.

on page 120 I am attempting to prepare the data for a "real world" application called "sportsstore".

I was instructed to install a module called "deployd", which apparently is used for modelling APIs for web applications.

I did that, and when I try to start the "deployd" service I get an error:

C:\PROGRA~2\deployd>dpd -p 5500 sportsstore\app.dpd --mongod
starting deployd v0.8.9...
internal/child_process.js:289
  var err = this._handle.spawn(options);
                         ^

TypeError: Bad argument
    at TypeError (native)
    at ChildProcess.spawn (internal/child_process.js:289:26)
    at exports.spawn (child_process.js:380:9)
    at Object.exports.restart (C:\Users\knowlton\AppData\Roaming\npm\node_modules\deployd\lib\util\mongod.js:38:14)
    at Command.start (C:\Users\knowlton\AppData\Roaming\npm\node_modules\deployd\bin\dpd:149:16)
    at Command.listener (C:\Users\knowlton\AppData\Roaming\npm\node_modules\deployd\node_modules\commander\index.js:301:8)
    at emitOne (events.js:96:13)
    at Command.emit (events.js:188:7)
    at Command.parseArgs (C:\Users\knowlton\AppData\Roaming\npm\node_modules\deployd\node_modules\commander\index.js:617:12)
    at Command.parse (C:\Users\knowlton\AppData\Roaming\npm\node_modules\deployd\node_modules\commander\index.js:458:21)
    at Object.<anonymous> 


This post looks at MongoDB and MySQL, and covers high-level MongoDB strengths, weaknesses, features, and uses from the perspective of an SQL user.
In this series, we will discuss common questions received as a database Solutions Engineer at Percona. In this role, we speak with a wide array of MySQL and MongoDB users responsible for environments ranging from extremely large and complex deployments to smaller single-server setups.
This post contains step-by-step instructions for setting up alerting in Percona Monitoring and Management (PMM) using Grafana.
Recently I was talking with Tim Sharp, one of my colleagues from our Technical Account Manager team about MongoDB’s scalability. While doing some quick training with some of the Percona team, Tim brought something to my attention...
Hi there,

I have a mongo aggregation query that works fine in the mongo shell (and Robomongo), but I cannot work out how to translate it into a PHP query.

I am using PHP 5.6 with the latest mongo class (MongoDB\Driver\Query).  The mongo query looks like this:

db.products.aggregate(
   [
     {$match: {
             vendor_name : "vendor8",
             distributor_id : 8
         }
     },    
     { $sort: { 
         cw_product_code: 1, download_Date: 1 
         } 
     },
     { $group:
         {
           _id: "$cw_product_id",
           lastDownloadDate: { $last: "$download_Date" },
         }
     }        
   ],
     {allowDiskUse: true}     
)


Any help to point me in the right direction would be appreciated.
Hi, I have the following document in MongoDB:

name: "john",
state: "GA",
city: [
     {"atlanta", 30350},
     {"atlanta", 30351},
     {"atlanta", 30352},
     {"marietta", 45093}
]


How do I aggregate the array of cities and get a document like this:

name: "john",
state: "GA",
city: [
     {"atlanta", "30350, 30351, 30352"},
     {"marietta", "45093"}
]
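For context, a rough sketch of one way to do this in the mongo shell. It assumes the array elements are actually stored as objects such as { name: "atlanta", zip: 30350 } and that the collection is called people, since the document shown above is not literal MongoDB syntax:

// Hedged sketch: unwind the city array, collect zips per city name,
// then regroup per person. The field names "name" and "zip" are assumptions.
db.people.aggregate([
  { $unwind: "$city" },
  { $group: {
      _id: { person: "$name", state: "$state", city: "$city.name" },
      zips: { $push: "$city.zip" }
  } },
  { $group: {
      _id: { person: "$_id.person", state: "$_id.state" },
      city: { $push: { name: "$_id.city", zips: "$zips" } }
  } }
])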
I’m looking for a good architecture to be able to efficiently query data currently stored in NoSQL dbs (specifically DocumentDB).

We have a number of microservices that manage various entities (say client, product, etc.). Each stores its data locally (in DocumentDB). We want to create another microservice that provides the ability to run real-time (latency on the order of seconds) ad-hoc queries over this data.

One option is to replicate all this data and store it in an SQL db, and build the query service on top of it. I expect this would make the queries quite fast, especially if we index all columns. (Of course, since this data keeps changing, we’d listen to a message queue for db updates.)

Is this the best way? How do companies go about building ad-hoc query functionality over NoSQL data? This seems like a problem that many large companies would have to solve. (I am new to NoSQL and microservice architecture, so I might be missing something basic.) Any suggestions/alternatives are appreciated.
Hello Experts,

Any advice on how to export data from Apache Cassandra using a local Windows machine?

Please assist
thank you
Is it just me or is NoSQL overapplied? It seems like, in an effort to be as trendy as possible, people shoehorn obviously relational problems into non-relational schemas. If you have to start throwing foreign keys on your objects, I think you need to rethink your solution. Sarah Mei makes a good point about this in her article:

http://www.sarahmei.com/blog/2013/11/11/why-you-should-never-use-mongodb/

Expert Comment by: Phil Phillips
It was definitely overapplied initially, though I think it's gotten better over the years. Part of it, too, is that a lot of the relational databases started incorporating NoSQL-like features (e.g. PostgreSQL and its JSON capabilities).

Hello there,

I am a Java developer and am very new to Linux and Cassandra. I am using Ubuntu 16.04 and have installed Cassandra v3.9.0 from the DataStax site. Now when I run the bin/cqlsh command from the Cassandra directory I get the error:
No appropriate python interpreter found.

To make sure I have Python, I tried the command python -v and then I get this message:
The program 'python' can be found in the following packages:

Do I have Python by default? Please help.

cheers
Zolf
Hi,

I would like to know how to ensure a primary key in DynamoDB. Is there any way to do that? Any example that can help us would be appreciated.

roy..
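For context, a minimal sketch with the AWS SDK for JavaScript (v2); the table name "Users", the key name "userId", the sample item and the region are placeholders. In DynamoDB the primary key is declared in the table's KeySchema (a partition key, optionally plus a sort key), every item must supply it, and a conditional write can refuse overwriting an existing key:

// Hedged sketch: declare the primary (partition) key at table creation,
// then use attribute_not_exists() so a duplicate key is rejected on write.
const AWS = require('aws-sdk');
const dynamodb = new AWS.DynamoDB({ region: 'us-east-1' });

dynamodb.createTable({
  TableName: 'Users',
  AttributeDefinitions: [{ AttributeName: 'userId', AttributeType: 'S' }],
  KeySchema: [{ AttributeName: 'userId', KeyType: 'HASH' }],   // the primary key
  ProvisionedThroughput: { ReadCapacityUnits: 5, WriteCapacityUnits: 5 }
}, function(err) {
  if (err) return console.error(err);

  // Wait until the table is ACTIVE before writing to it.
  dynamodb.waitFor('tableExists', { TableName: 'Users' }, function(waitErr) {
    if (waitErr) return console.error(waitErr);

    dynamodb.putItem({
      TableName: 'Users',
      Item: { userId: { S: 'user-001' }, name: { S: 'Roy' } },
      ConditionExpression: 'attribute_not_exists(userId)'      // duplicate keys fail
    }, function(putErr) {
      if (putErr) console.error(putErr);   // ConditionalCheckFailedException if the key already exists
    });
  });
});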
Hello  Experts,

I would like to know if the approach below can help us build an online application where a registered user can upload videos and comments. We have chosen an RDBMS (MySQL) and a NoSQL database (MongoDB). In the RDBMS we would keep master data, such as the registered-user, country and region tables, which are static. Transactional data, such as uploaded videos and comments, would go in the NoSQL database (MongoDB). With this approach, will we manage the data better, or is there another approach that would yield a better result infrastructure-wise?
We will be deploying on Amazon Web Services; the idea is to have an architecture where agility, performance, and scalability reign supreme.



Thanks


Roy,,,
Hi,

Assuming I have a parent class that I filter on various properties, one of which is an array of items. Now say that I want to only return the parent item if the values in its array of items are above a min value and below a max value... that's fine, I can work that bit out. What if I then want to sort on the filtered result set of those items?

I made a C# fiddle example to show what I'm trying to achieve: https://dotnetfiddle.net/mV4d28 (note that foo2 is returned first even though foo1 has items in its array that are less than those in foo2).

As I need to do this using an index, I need the index to be able to compute the order-by based on the filter criteria used in my query.

I know Elasticsearch has an inner hits function that does this, and Mongo has pipelines which also do this, so I'm sure Raven must have a way of doing this too?

I was hoping that using just an index and a transform with params I could achieve this, so I tried it.

My index and transform look like this:

public class familyTransfrom : AbstractTransformerCreationTask<ParentItem>
{
    public class Result : ParentItem{
        public double[] ChildItemValuesFiltered { get; set; }
    }
    public familyTransfrom(){
        TransformResults = parents => from parent in parents
        let filterMinValue = Convert.ToDouble(ParameterOrDefault("FilterMinValue", Convert.ToDouble(0)).Value<double>())
        let filterMaxValue = 

I am currently trying to run a script that I have written, however it constantly hangs halfway through. It has worked on the odd occasion, but more often than not it gets nowhere. Here is the script:

:: load metadata only

impdp  *username*/******@Server1 exclude=user REMAP_SCHEMA=SC_MUTBLDN:SC_MUTBLDN schemas=SC_MUTBLDN CONTENT=METADATA_ONLY dumpfile=ALL_METADATA_DAILY.dmp logfile=ALL_METADATA_DAILY.log



:: load data only

impdp  *username*/******@Server1  Full=Y EXCLUDE=TABLE:"LIKE'CRAC%'"  dumpfile=ALL_DATA_DAILY.DMP table_exists_action=replace  logfile=ALL_DATA_DAILY.log

impdp  *username*/******@Server1  EXCLUDE=TABLE:"LIKE'CRAC%'"  REMAP_SCHEMA=SC_MUTBLDN:SC_MUTBLDN   dumpfile=ALL_DATA_DAILY.DMP table_exists_action=replace  logfile=ALL_DATA_DAILY.log


:: dump data minus crac tables

impdp *username*/******@Server1 tables=CRAC_TYPE dumpfile=CRAC_TYPE.dmp table_exists_action=replace logfile=CRAC_TYPE.log
impdp *username*/******@Server1 tables=CRAC_KEYWORD dumpfile=CRAC_KEYWORD.dmp table_exists_action=replace logfile=CRAC_KEYWORD.log
impdp *username*/******@Server1 tables=CRAC_DEALTYPE dumpfile=CRAC_DEALTYPE.dmp table_exists_action=replace logfile=CRAC_DEALTYPE.log
impdp *username*/******@Server1 tables=CRAC_TABLENAME dumpfile=CRAC_TABLENAME.dmp table_exists_action=replace logfile=CRAC_TABLENAME.log
impdp *username*/******@Server1 tables=CRAC_DATABASENAME dumpfile=CRAC_DATABASENAME.dmp table_exists_action=replace logfile=CRAC_DATABASENAME.log
